Test Report: KVM_Linux_crio 19312

759e2b673c985a1fcc212824ad6ad48c6b3dc495:2024-07-31:35593

Test fail (11/216)

TestAddons/Setup (2400.07s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-801478 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-801478 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.949711555s)
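Note: the `signal: killed` exit after 39m59.9s lines up with the 2400.07s recorded for the test, so the `minikube start` subprocess was most likely terminated when the harness's 40-minute deadline expired rather than failing on its own. A minimal sketch for re-running only this test locally with a larger budget, assuming minikube's documented integration-test invocation (the 90m value and driver arguments are illustrative, not taken from this report):

	# Sketch only: rerun just TestAddons/Setup with more headroom.
	# -run and -timeout are standard `go test` flags; -tags=integration and
	# --minikube-start-args follow minikube's contributor docs and may need adjusting.
	go test -v -tags=integration ./test/integration \
	  -run 'TestAddons/Setup' -timeout 90m \
	  --minikube-start-args='--driver=kvm2 --container-runtime=crio'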

-- stdout --
	* [addons-801478] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-801478" primary control-plane node in "addons-801478" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image docker.io/registry:2.8.3
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image docker.io/busybox:stable
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-801478 service yakd-dashboard -n yakd-dashboard
	
	* Verifying ingress addon...
	* Verifying registry addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	* Verifying csi-hostpath-driver addon...
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-801478 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, metrics-server, inspektor-gadget, storage-provisioner, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth

-- /stdout --
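As the gcp-auth note in the stdout above says, a pod can opt out of the credential mount by carrying the `gcp-auth-skip-secret` label. A hypothetical illustration (pod name and image are placeholders; the label must be present at creation time, since the addon's webhook mutates pods as they are admitted):

	# Hypothetical example only; nothing below comes from this report.
	kubectl run skip-gcp-demo --image=busybox \
	  --labels='gcp-auth-skip-secret=true' -- sleep 3600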
** stderr ** 
	I0731 21:56:31.587392 1180289 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:56:31.587656 1180289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:56:31.587666 1180289 out.go:304] Setting ErrFile to fd 2...
	I0731 21:56:31.587670 1180289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:56:31.587855 1180289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 21:56:31.588511 1180289 out.go:298] Setting JSON to false
	I0731 21:56:31.589495 1180289 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":20343,"bootTime":1722442649,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:56:31.589563 1180289 start.go:139] virtualization: kvm guest
	I0731 21:56:31.591668 1180289 out.go:177] * [addons-801478] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:56:31.593001 1180289 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 21:56:31.593058 1180289 notify.go:220] Checking for updates...
	I0731 21:56:31.595525 1180289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:56:31.596927 1180289 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 21:56:31.598090 1180289 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 21:56:31.599344 1180289 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 21:56:31.600574 1180289 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 21:56:31.601920 1180289 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:56:31.634682 1180289 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 21:56:31.635907 1180289 start.go:297] selected driver: kvm2
	I0731 21:56:31.635922 1180289 start.go:901] validating driver "kvm2" against <nil>
	I0731 21:56:31.635963 1180289 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 21:56:31.637062 1180289 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:56:31.637160 1180289 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:56:31.653759 1180289 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:56:31.653839 1180289 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:56:31.654073 1180289 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 21:56:31.654103 1180289 cni.go:84] Creating CNI manager for ""
	I0731 21:56:31.654121 1180289 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:56:31.654136 1180289 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 21:56:31.654207 1180289 start.go:340] cluster config:
	{Name:addons-801478 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-801478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:56:31.654300 1180289 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:56:31.656235 1180289 out.go:177] * Starting "addons-801478" primary control-plane node in "addons-801478" cluster
	I0731 21:56:31.657617 1180289 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:56:31.657652 1180289 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 21:56:31.657662 1180289 cache.go:56] Caching tarball of preloaded images
	I0731 21:56:31.657742 1180289 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 21:56:31.657755 1180289 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 21:56:31.658061 1180289 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/config.json ...
	I0731 21:56:31.658085 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/config.json: {Name:mke65f60b4059a95ca305abebf5c67e914780cf0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:31.658274 1180289 start.go:360] acquireMachinesLock for addons-801478: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 21:56:31.658359 1180289 start.go:364] duration metric: took 65.692µs to acquireMachinesLock for "addons-801478"
	I0731 21:56:31.658382 1180289 start.go:93] Provisioning new machine with config: &{Name:addons-801478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-801478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:56:31.658476 1180289 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 21:56:31.660754 1180289 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0731 21:56:31.660917 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:56:31.660962 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:56:31.676118 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37517
	I0731 21:56:31.676620 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:56:31.677275 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:56:31.677299 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:56:31.677666 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:56:31.677865 1180289 main.go:141] libmachine: (addons-801478) Calling .GetMachineName
	I0731 21:56:31.678010 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:56:31.678146 1180289 start.go:159] libmachine.API.Create for "addons-801478" (driver="kvm2")
	I0731 21:56:31.678179 1180289 client.go:168] LocalClient.Create starting
	I0731 21:56:31.678224 1180289 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem
	I0731 21:56:31.811251 1180289 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem
	I0731 21:56:31.958380 1180289 main.go:141] libmachine: Running pre-create checks...
	I0731 21:56:31.958414 1180289 main.go:141] libmachine: (addons-801478) Calling .PreCreateCheck
	I0731 21:56:31.958990 1180289 main.go:141] libmachine: (addons-801478) Calling .GetConfigRaw
	I0731 21:56:31.959464 1180289 main.go:141] libmachine: Creating machine...
	I0731 21:56:31.959478 1180289 main.go:141] libmachine: (addons-801478) Calling .Create
	I0731 21:56:31.959613 1180289 main.go:141] libmachine: (addons-801478) Creating KVM machine...
	I0731 21:56:31.960882 1180289 main.go:141] libmachine: (addons-801478) DBG | found existing default KVM network
	I0731 21:56:31.961767 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:31.961610 1180311 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012faf0}
	I0731 21:56:31.961830 1180289 main.go:141] libmachine: (addons-801478) DBG | created network xml: 
	I0731 21:56:31.961852 1180289 main.go:141] libmachine: (addons-801478) DBG | <network>
	I0731 21:56:31.961862 1180289 main.go:141] libmachine: (addons-801478) DBG |   <name>mk-addons-801478</name>
	I0731 21:56:31.961872 1180289 main.go:141] libmachine: (addons-801478) DBG |   <dns enable='no'/>
	I0731 21:56:31.961882 1180289 main.go:141] libmachine: (addons-801478) DBG |   
	I0731 21:56:31.961896 1180289 main.go:141] libmachine: (addons-801478) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 21:56:31.961907 1180289 main.go:141] libmachine: (addons-801478) DBG |     <dhcp>
	I0731 21:56:31.961912 1180289 main.go:141] libmachine: (addons-801478) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 21:56:31.961918 1180289 main.go:141] libmachine: (addons-801478) DBG |     </dhcp>
	I0731 21:56:31.961922 1180289 main.go:141] libmachine: (addons-801478) DBG |   </ip>
	I0731 21:56:31.961927 1180289 main.go:141] libmachine: (addons-801478) DBG |   
	I0731 21:56:31.961933 1180289 main.go:141] libmachine: (addons-801478) DBG | </network>
	I0731 21:56:31.961939 1180289 main.go:141] libmachine: (addons-801478) DBG | 
	I0731 21:56:31.967418 1180289 main.go:141] libmachine: (addons-801478) DBG | trying to create private KVM network mk-addons-801478 192.168.39.0/24...
	I0731 21:56:32.037899 1180289 main.go:141] libmachine: (addons-801478) DBG | private KVM network mk-addons-801478 192.168.39.0/24 created
	I0731 21:56:32.037985 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:32.037851 1180311 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 21:56:32.038003 1180289 main.go:141] libmachine: (addons-801478) Setting up store path in /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478 ...
	I0731 21:56:32.038024 1180289 main.go:141] libmachine: (addons-801478) Building disk image from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 21:56:32.038038 1180289 main.go:141] libmachine: (addons-801478) Downloading /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 21:56:32.304656 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:32.304489 1180311 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa...
	I0731 21:56:32.396637 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:32.396448 1180311 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/addons-801478.rawdisk...
	I0731 21:56:32.396674 1180289 main.go:141] libmachine: (addons-801478) DBG | Writing magic tar header
	I0731 21:56:32.396690 1180289 main.go:141] libmachine: (addons-801478) DBG | Writing SSH key tar header
	I0731 21:56:32.396708 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:32.396619 1180311 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478 ...
	I0731 21:56:32.396722 1180289 main.go:141] libmachine: (addons-801478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478
	I0731 21:56:32.396782 1180289 main.go:141] libmachine: (addons-801478) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478 (perms=drwx------)
	I0731 21:56:32.396810 1180289 main.go:141] libmachine: (addons-801478) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines (perms=drwxr-xr-x)
	I0731 21:56:32.396822 1180289 main.go:141] libmachine: (addons-801478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines
	I0731 21:56:32.396833 1180289 main.go:141] libmachine: (addons-801478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 21:56:32.396839 1180289 main.go:141] libmachine: (addons-801478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186
	I0731 21:56:32.396848 1180289 main.go:141] libmachine: (addons-801478) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 21:56:32.396853 1180289 main.go:141] libmachine: (addons-801478) DBG | Checking permissions on dir: /home/jenkins
	I0731 21:56:32.396862 1180289 main.go:141] libmachine: (addons-801478) DBG | Checking permissions on dir: /home
	I0731 21:56:32.396869 1180289 main.go:141] libmachine: (addons-801478) DBG | Skipping /home - not owner
	I0731 21:56:32.396922 1180289 main.go:141] libmachine: (addons-801478) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube (perms=drwxr-xr-x)
	I0731 21:56:32.396952 1180289 main.go:141] libmachine: (addons-801478) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186 (perms=drwxrwxr-x)
	I0731 21:56:32.396961 1180289 main.go:141] libmachine: (addons-801478) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 21:56:32.396967 1180289 main.go:141] libmachine: (addons-801478) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 21:56:32.396976 1180289 main.go:141] libmachine: (addons-801478) Creating domain...
	I0731 21:56:32.398216 1180289 main.go:141] libmachine: (addons-801478) define libvirt domain using xml: 
	I0731 21:56:32.398242 1180289 main.go:141] libmachine: (addons-801478) <domain type='kvm'>
	I0731 21:56:32.398251 1180289 main.go:141] libmachine: (addons-801478)   <name>addons-801478</name>
	I0731 21:56:32.398259 1180289 main.go:141] libmachine: (addons-801478)   <memory unit='MiB'>4000</memory>
	I0731 21:56:32.398267 1180289 main.go:141] libmachine: (addons-801478)   <vcpu>2</vcpu>
	I0731 21:56:32.398278 1180289 main.go:141] libmachine: (addons-801478)   <features>
	I0731 21:56:32.398286 1180289 main.go:141] libmachine: (addons-801478)     <acpi/>
	I0731 21:56:32.398293 1180289 main.go:141] libmachine: (addons-801478)     <apic/>
	I0731 21:56:32.398302 1180289 main.go:141] libmachine: (addons-801478)     <pae/>
	I0731 21:56:32.398309 1180289 main.go:141] libmachine: (addons-801478)     
	I0731 21:56:32.398316 1180289 main.go:141] libmachine: (addons-801478)   </features>
	I0731 21:56:32.398321 1180289 main.go:141] libmachine: (addons-801478)   <cpu mode='host-passthrough'>
	I0731 21:56:32.398326 1180289 main.go:141] libmachine: (addons-801478)   
	I0731 21:56:32.398338 1180289 main.go:141] libmachine: (addons-801478)   </cpu>
	I0731 21:56:32.398349 1180289 main.go:141] libmachine: (addons-801478)   <os>
	I0731 21:56:32.398359 1180289 main.go:141] libmachine: (addons-801478)     <type>hvm</type>
	I0731 21:56:32.398368 1180289 main.go:141] libmachine: (addons-801478)     <boot dev='cdrom'/>
	I0731 21:56:32.398379 1180289 main.go:141] libmachine: (addons-801478)     <boot dev='hd'/>
	I0731 21:56:32.398390 1180289 main.go:141] libmachine: (addons-801478)     <bootmenu enable='no'/>
	I0731 21:56:32.398400 1180289 main.go:141] libmachine: (addons-801478)   </os>
	I0731 21:56:32.398409 1180289 main.go:141] libmachine: (addons-801478)   <devices>
	I0731 21:56:32.398419 1180289 main.go:141] libmachine: (addons-801478)     <disk type='file' device='cdrom'>
	I0731 21:56:32.398433 1180289 main.go:141] libmachine: (addons-801478)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/boot2docker.iso'/>
	I0731 21:56:32.398453 1180289 main.go:141] libmachine: (addons-801478)       <target dev='hdc' bus='scsi'/>
	I0731 21:56:32.398465 1180289 main.go:141] libmachine: (addons-801478)       <readonly/>
	I0731 21:56:32.398475 1180289 main.go:141] libmachine: (addons-801478)     </disk>
	I0731 21:56:32.398484 1180289 main.go:141] libmachine: (addons-801478)     <disk type='file' device='disk'>
	I0731 21:56:32.398498 1180289 main.go:141] libmachine: (addons-801478)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 21:56:32.398514 1180289 main.go:141] libmachine: (addons-801478)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/addons-801478.rawdisk'/>
	I0731 21:56:32.398529 1180289 main.go:141] libmachine: (addons-801478)       <target dev='hda' bus='virtio'/>
	I0731 21:56:32.398541 1180289 main.go:141] libmachine: (addons-801478)     </disk>
	I0731 21:56:32.398552 1180289 main.go:141] libmachine: (addons-801478)     <interface type='network'>
	I0731 21:56:32.398564 1180289 main.go:141] libmachine: (addons-801478)       <source network='mk-addons-801478'/>
	I0731 21:56:32.398575 1180289 main.go:141] libmachine: (addons-801478)       <model type='virtio'/>
	I0731 21:56:32.398585 1180289 main.go:141] libmachine: (addons-801478)     </interface>
	I0731 21:56:32.398599 1180289 main.go:141] libmachine: (addons-801478)     <interface type='network'>
	I0731 21:56:32.398612 1180289 main.go:141] libmachine: (addons-801478)       <source network='default'/>
	I0731 21:56:32.398622 1180289 main.go:141] libmachine: (addons-801478)       <model type='virtio'/>
	I0731 21:56:32.398630 1180289 main.go:141] libmachine: (addons-801478)     </interface>
	I0731 21:56:32.398640 1180289 main.go:141] libmachine: (addons-801478)     <serial type='pty'>
	I0731 21:56:32.398651 1180289 main.go:141] libmachine: (addons-801478)       <target port='0'/>
	I0731 21:56:32.398661 1180289 main.go:141] libmachine: (addons-801478)     </serial>
	I0731 21:56:32.398682 1180289 main.go:141] libmachine: (addons-801478)     <console type='pty'>
	I0731 21:56:32.398699 1180289 main.go:141] libmachine: (addons-801478)       <target type='serial' port='0'/>
	I0731 21:56:32.398709 1180289 main.go:141] libmachine: (addons-801478)     </console>
	I0731 21:56:32.398715 1180289 main.go:141] libmachine: (addons-801478)     <rng model='virtio'>
	I0731 21:56:32.398722 1180289 main.go:141] libmachine: (addons-801478)       <backend model='random'>/dev/random</backend>
	I0731 21:56:32.398732 1180289 main.go:141] libmachine: (addons-801478)     </rng>
	I0731 21:56:32.398744 1180289 main.go:141] libmachine: (addons-801478)     
	I0731 21:56:32.398757 1180289 main.go:141] libmachine: (addons-801478)     
	I0731 21:56:32.398769 1180289 main.go:141] libmachine: (addons-801478)   </devices>
	I0731 21:56:32.398779 1180289 main.go:141] libmachine: (addons-801478) </domain>
	I0731 21:56:32.398789 1180289 main.go:141] libmachine: (addons-801478) 
	I0731 21:56:32.403363 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:9a:fa:b1 in network default
	I0731 21:56:32.404017 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:32.404036 1180289 main.go:141] libmachine: (addons-801478) Ensuring networks are active...
	I0731 21:56:32.404818 1180289 main.go:141] libmachine: (addons-801478) Ensuring network default is active
	I0731 21:56:32.405136 1180289 main.go:141] libmachine: (addons-801478) Ensuring network mk-addons-801478 is active
	I0731 21:56:32.405627 1180289 main.go:141] libmachine: (addons-801478) Getting domain xml...
	I0731 21:56:32.406469 1180289 main.go:141] libmachine: (addons-801478) Creating domain...
	I0731 21:56:33.617560 1180289 main.go:141] libmachine: (addons-801478) Waiting to get IP...
	I0731 21:56:33.618397 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:33.618793 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:33.618845 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:33.618776 1180311 retry.go:31] will retry after 239.666261ms: waiting for machine to come up
	I0731 21:56:33.860445 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:33.860873 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:33.860906 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:33.860819 1180311 retry.go:31] will retry after 256.091463ms: waiting for machine to come up
	I0731 21:56:34.118290 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:34.118772 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:34.118795 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:34.118731 1180311 retry.go:31] will retry after 296.252551ms: waiting for machine to come up
	I0731 21:56:34.416289 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:34.416738 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:34.416778 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:34.416679 1180311 retry.go:31] will retry after 415.045841ms: waiting for machine to come up
	I0731 21:56:34.833231 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:34.833673 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:34.833703 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:34.833620 1180311 retry.go:31] will retry after 621.062834ms: waiting for machine to come up
	I0731 21:56:35.456353 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:35.456775 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:35.456807 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:35.456729 1180311 retry.go:31] will retry after 894.132495ms: waiting for machine to come up
	I0731 21:56:36.352916 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:36.353331 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:36.353356 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:36.353281 1180311 retry.go:31] will retry after 1.092916989s: waiting for machine to come up
	I0731 21:56:37.448448 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:37.448847 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:37.448888 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:37.448808 1180311 retry.go:31] will retry after 1.238017466s: waiting for machine to come up
	I0731 21:56:38.688247 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:38.688637 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:38.688669 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:38.688582 1180311 retry.go:31] will retry after 1.745580347s: waiting for machine to come up
	I0731 21:56:40.436723 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:40.437223 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:40.437256 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:40.437152 1180311 retry.go:31] will retry after 1.958631732s: waiting for machine to come up
	I0731 21:56:42.397988 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:42.398389 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:42.398416 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:42.398339 1180311 retry.go:31] will retry after 2.412172449s: waiting for machine to come up
	I0731 21:56:44.814082 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:44.814417 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:44.814442 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:44.814402 1180311 retry.go:31] will retry after 2.559266426s: waiting for machine to come up
	I0731 21:56:47.375664 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:47.376163 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find current IP address of domain addons-801478 in network mk-addons-801478
	I0731 21:56:47.376191 1180289 main.go:141] libmachine: (addons-801478) DBG | I0731 21:56:47.376131 1180311 retry.go:31] will retry after 3.996351512s: waiting for machine to come up
	I0731 21:56:51.377247 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.377653 1180289 main.go:141] libmachine: (addons-801478) Found IP for machine: 192.168.39.150
	I0731 21:56:51.377675 1180289 main.go:141] libmachine: (addons-801478) Reserving static IP address...
	I0731 21:56:51.377689 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has current primary IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.378063 1180289 main.go:141] libmachine: (addons-801478) DBG | unable to find host DHCP lease matching {name: "addons-801478", mac: "52:54:00:90:65:63", ip: "192.168.39.150"} in network mk-addons-801478
	I0731 21:56:51.458764 1180289 main.go:141] libmachine: (addons-801478) DBG | Getting to WaitForSSH function...
	I0731 21:56:51.458795 1180289 main.go:141] libmachine: (addons-801478) Reserved static IP address: 192.168.39.150
	I0731 21:56:51.458808 1180289 main.go:141] libmachine: (addons-801478) Waiting for SSH to be available...
	I0731 21:56:51.461525 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.461938 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:minikube Clientid:01:52:54:00:90:65:63}
	I0731 21:56:51.461966 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.462119 1180289 main.go:141] libmachine: (addons-801478) DBG | Using SSH client type: external
	I0731 21:56:51.462162 1180289 main.go:141] libmachine: (addons-801478) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa (-rw-------)
	I0731 21:56:51.462201 1180289 main.go:141] libmachine: (addons-801478) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.150 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 21:56:51.462215 1180289 main.go:141] libmachine: (addons-801478) DBG | About to run SSH command:
	I0731 21:56:51.462229 1180289 main.go:141] libmachine: (addons-801478) DBG | exit 0
	I0731 21:56:51.592056 1180289 main.go:141] libmachine: (addons-801478) DBG | SSH cmd err, output: <nil>: 
	I0731 21:56:51.592383 1180289 main.go:141] libmachine: (addons-801478) KVM machine creation complete!
	I0731 21:56:51.592673 1180289 main.go:141] libmachine: (addons-801478) Calling .GetConfigRaw
	I0731 21:56:51.593261 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:56:51.593469 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:56:51.593625 1180289 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 21:56:51.593643 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:56:51.594840 1180289 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 21:56:51.594858 1180289 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 21:56:51.594866 1180289 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 21:56:51.594875 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:56:51.598705 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.599054 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:51.599085 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.599227 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:56:51.599449 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:51.599588 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:51.599734 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:56:51.599920 1180289 main.go:141] libmachine: Using SSH client type: native
	I0731 21:56:51.600151 1180289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0731 21:56:51.600165 1180289 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 21:56:51.711200 1180289 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:56:51.711225 1180289 main.go:141] libmachine: Detecting the provisioner...
	I0731 21:56:51.711233 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:56:51.713990 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.714399 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:51.714429 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.714686 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:56:51.714867 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:51.715070 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:51.715241 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:56:51.715419 1180289 main.go:141] libmachine: Using SSH client type: native
	I0731 21:56:51.715599 1180289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0731 21:56:51.715610 1180289 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 21:56:51.824538 1180289 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 21:56:51.824658 1180289 main.go:141] libmachine: found compatible host: buildroot
	I0731 21:56:51.824674 1180289 main.go:141] libmachine: Provisioning with buildroot...
	I0731 21:56:51.824686 1180289 main.go:141] libmachine: (addons-801478) Calling .GetMachineName
	I0731 21:56:51.824978 1180289 buildroot.go:166] provisioning hostname "addons-801478"
	I0731 21:56:51.825008 1180289 main.go:141] libmachine: (addons-801478) Calling .GetMachineName
	I0731 21:56:51.825219 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:56:51.827695 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.828117 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:51.828147 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.828278 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:56:51.828491 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:51.828637 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:51.828767 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:56:51.828959 1180289 main.go:141] libmachine: Using SSH client type: native
	I0731 21:56:51.829176 1180289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0731 21:56:51.829189 1180289 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-801478 && echo "addons-801478" | sudo tee /etc/hostname
	I0731 21:56:51.953385 1180289 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-801478
	
	I0731 21:56:51.953414 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:56:51.956222 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.956580 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:51.956612 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:51.956792 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:56:51.957071 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:51.957273 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:51.957435 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:56:51.957608 1180289 main.go:141] libmachine: Using SSH client type: native
	I0731 21:56:51.957822 1180289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0731 21:56:51.957848 1180289 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-801478' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-801478/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-801478' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 21:56:52.076272 1180289 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 21:56:52.076323 1180289 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 21:56:52.076359 1180289 buildroot.go:174] setting up certificates
	I0731 21:56:52.076375 1180289 provision.go:84] configureAuth start
	I0731 21:56:52.076395 1180289 main.go:141] libmachine: (addons-801478) Calling .GetMachineName
	I0731 21:56:52.076741 1180289 main.go:141] libmachine: (addons-801478) Calling .GetIP
	I0731 21:56:52.079256 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.079610 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:52.079633 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.079800 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:56:52.081790 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.082071 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:52.082148 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.082209 1180289 provision.go:143] copyHostCerts
	I0731 21:56:52.082291 1180289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 21:56:52.082480 1180289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 21:56:52.082579 1180289 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 21:56:52.082650 1180289 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.addons-801478 san=[127.0.0.1 192.168.39.150 addons-801478 localhost minikube]
	I0731 21:56:52.231798 1180289 provision.go:177] copyRemoteCerts
	I0731 21:56:52.231873 1180289 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 21:56:52.231907 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:56:52.234438 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.234699 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:52.234724 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.234948 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:56:52.235173 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:52.235302 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:56:52.235453 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:56:52.322123 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 21:56:52.347916 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 21:56:52.373708 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 21:56:52.398590 1180289 provision.go:87] duration metric: took 322.194932ms to configureAuth
	I0731 21:56:52.398620 1180289 buildroot.go:189] setting minikube options for container-runtime
	I0731 21:56:52.398835 1180289 config.go:182] Loaded profile config "addons-801478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:56:52.398942 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:56:52.401706 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.402069 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:52.402100 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.402320 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:56:52.402555 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:52.402711 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:52.402870 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:56:52.403051 1180289 main.go:141] libmachine: Using SSH client type: native
	I0731 21:56:52.403237 1180289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0731 21:56:52.403256 1180289 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 21:56:52.657104 1180289 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 21:56:52.657135 1180289 main.go:141] libmachine: Checking connection to Docker...
	I0731 21:56:52.657148 1180289 main.go:141] libmachine: (addons-801478) Calling .GetURL
	I0731 21:56:52.658593 1180289 main.go:141] libmachine: (addons-801478) DBG | Using libvirt version 6000000
	I0731 21:56:52.660834 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.661205 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:52.661257 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.661415 1180289 main.go:141] libmachine: Docker is up and running!
	I0731 21:56:52.661430 1180289 main.go:141] libmachine: Reticulating splines...
	I0731 21:56:52.661439 1180289 client.go:171] duration metric: took 20.983249285s to LocalClient.Create
	I0731 21:56:52.661465 1180289 start.go:167] duration metric: took 20.983319443s to libmachine.API.Create "addons-801478"
	I0731 21:56:52.661472 1180289 start.go:293] postStartSetup for "addons-801478" (driver="kvm2")
	I0731 21:56:52.661494 1180289 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 21:56:52.661510 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:56:52.661786 1180289 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 21:56:52.661814 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:56:52.664253 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.664626 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:52.664656 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.664851 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:56:52.665055 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:52.665235 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:56:52.665385 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:56:52.749979 1180289 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 21:56:52.753868 1180289 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 21:56:52.753905 1180289 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 21:56:52.753995 1180289 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 21:56:52.754018 1180289 start.go:296] duration metric: took 92.540961ms for postStartSetup
	I0731 21:56:52.754059 1180289 main.go:141] libmachine: (addons-801478) Calling .GetConfigRaw
	I0731 21:56:52.754644 1180289 main.go:141] libmachine: (addons-801478) Calling .GetIP
	I0731 21:56:52.757310 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.757701 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:52.757738 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.757954 1180289 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/config.json ...
	I0731 21:56:52.758149 1180289 start.go:128] duration metric: took 21.099660101s to createHost
	I0731 21:56:52.758174 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:56:52.760712 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.761038 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:52.761069 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.761254 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:56:52.761475 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:52.761647 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:52.761808 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:56:52.761976 1180289 main.go:141] libmachine: Using SSH client type: native
	I0731 21:56:52.762194 1180289 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0731 21:56:52.762210 1180289 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 21:56:52.872832 1180289 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722463012.846244053
	
	I0731 21:56:52.872856 1180289 fix.go:216] guest clock: 1722463012.846244053
	I0731 21:56:52.872863 1180289 fix.go:229] Guest: 2024-07-31 21:56:52.846244053 +0000 UTC Remote: 2024-07-31 21:56:52.75816235 +0000 UTC m=+21.206684603 (delta=88.081703ms)
	I0731 21:56:52.872887 1180289 fix.go:200] guest clock delta is within tolerance: 88.081703ms
	I0731 21:56:52.872893 1180289 start.go:83] releasing machines lock for "addons-801478", held for 21.214522914s
	I0731 21:56:52.872913 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:56:52.873207 1180289 main.go:141] libmachine: (addons-801478) Calling .GetIP
	I0731 21:56:52.875766 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.876226 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:52.876255 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.876466 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:56:52.877016 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:56:52.877224 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:56:52.877320 1180289 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 21:56:52.877374 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:56:52.877426 1180289 ssh_runner.go:195] Run: cat /version.json
	I0731 21:56:52.877448 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:56:52.880012 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.880317 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:52.880346 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.880369 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.880499 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:56:52.880697 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:52.880783 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:52.880811 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:52.880885 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:56:52.880994 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:56:52.881068 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:56:52.881173 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:56:52.881339 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:56:52.881500 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:56:52.980068 1180289 ssh_runner.go:195] Run: systemctl --version
	I0731 21:56:52.985984 1180289 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 21:56:53.145327 1180289 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 21:56:53.151280 1180289 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 21:56:53.151365 1180289 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 21:56:53.167229 1180289 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 21:56:53.167262 1180289 start.go:495] detecting cgroup driver to use...
	I0731 21:56:53.167333 1180289 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 21:56:53.183249 1180289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 21:56:53.197122 1180289 docker.go:217] disabling cri-docker service (if available) ...
	I0731 21:56:53.197195 1180289 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 21:56:53.210709 1180289 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 21:56:53.224081 1180289 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 21:56:53.337901 1180289 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 21:56:53.509478 1180289 docker.go:233] disabling docker service ...
	I0731 21:56:53.509575 1180289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 21:56:53.532441 1180289 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 21:56:53.544872 1180289 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 21:56:53.658917 1180289 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 21:56:53.765422 1180289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 21:56:53.778915 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 21:56:53.796626 1180289 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 21:56:53.796689 1180289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:56:53.807176 1180289 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 21:56:53.807268 1180289 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:56:53.817383 1180289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:56:53.827930 1180289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:56:53.837922 1180289 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 21:56:53.848624 1180289 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:56:53.858670 1180289 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 21:56:53.875108 1180289 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
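The sequence of sed edits above rewrites CRI-O's drop-in config: it pins the pause image, switches the cgroup manager to cgroupfs, moves conmon into the pod cgroup, and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A hedged way to confirm the resulting values on the node (file path and key names come from the commands above):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the edits above: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs",
	# conmon_cgroup = "pod", and a default_sysctls entry "net.ipv4.ip_unprivileged_port_start=0"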
	I0731 21:56:53.885308 1180289 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 21:56:53.894754 1180289 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 21:56:53.894845 1180289 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 21:56:53.907064 1180289 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
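The earlier sysctl probe failed because /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so minikube loads the module and then enables IPv4 forwarding. A rough manual equivalent (module name and paths taken from the log; persistence across reboots is not attempted here):

	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables        # should now resolve instead of failing with "No such file or directory"
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward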
	I0731 21:56:53.916534 1180289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:56:54.026880 1180289 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 21:56:54.394429 1180289 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 21:56:54.394549 1180289 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 21:56:54.399140 1180289 start.go:563] Will wait 60s for crictl version
	I0731 21:56:54.399221 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:56:54.402973 1180289 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 21:56:54.438379 1180289 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
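The crictl probe above confirms the CRI endpoint is answering after the restart. An equivalent manual check, pointing crictl at the socket configured in /etc/crictl.yaml earlier in this log, would be:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	# Expect RuntimeName cri-o and RuntimeVersion 1.29.1, matching the output above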
	I0731 21:56:54.438514 1180289 ssh_runner.go:195] Run: crio --version
	I0731 21:56:54.469115 1180289 ssh_runner.go:195] Run: crio --version
	I0731 21:56:54.498841 1180289 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 21:56:54.500260 1180289 main.go:141] libmachine: (addons-801478) Calling .GetIP
	I0731 21:56:54.503275 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:54.503693 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:56:54.503719 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:56:54.503928 1180289 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 21:56:54.508070 1180289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
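The grep/cp pipeline above rewrites /etc/hosts in place so that host.minikube.internal maps to 192.168.39.1 inside the guest. A quick check of the result (the expected line is reconstructed from the command, not output captured in this log):

	grep host.minikube.internal /etc/hosts
	# Expected line: 192.168.39.1	host.minikube.internal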
	I0731 21:56:54.520294 1180289 kubeadm.go:883] updating cluster {Name:addons-801478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-801478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 21:56:54.520425 1180289 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 21:56:54.520480 1180289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:56:54.551080 1180289 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 21:56:54.551153 1180289 ssh_runner.go:195] Run: which lz4
	I0731 21:56:54.555017 1180289 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 21:56:54.558755 1180289 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 21:56:54.558809 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 21:56:55.755413 1180289 crio.go:462] duration metric: took 1.200429761s to copy over tarball
	I0731 21:56:55.755500 1180289 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 21:56:58.001498 1180289 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.245965142s)
	I0731 21:56:58.001535 1180289 crio.go:469] duration metric: took 2.246085353s to extract the tarball
	I0731 21:56:58.001563 1180289 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 21:56:58.038361 1180289 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 21:56:58.079008 1180289 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 21:56:58.079033 1180289 cache_images.go:84] Images are preloaded, skipping loading
	I0731 21:56:58.079042 1180289 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.30.3 crio true true} ...
	I0731 21:56:58.079168 1180289 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-801478 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-801478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
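The [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube renders; it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. To see the unit as systemd merges it, a hedged check on the node would be:

	sudo systemctl cat kubelet
	# The effective ExecStart should match the fragment above, including --node-ip=192.168.39.150 and --hostname-override=addons-801478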
	I0731 21:56:58.079241 1180289 ssh_runner.go:195] Run: crio config
	I0731 21:56:58.124191 1180289 cni.go:84] Creating CNI manager for ""
	I0731 21:56:58.124215 1180289 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:56:58.124228 1180289 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 21:56:58.124256 1180289 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-801478 NodeName:addons-801478 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 21:56:58.124436 1180289 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-801478"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
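The YAML above is the complete kubeadm configuration minikube renders (written below as /var/tmp/minikube/kubeadm.yaml.new and later copied to kubeadm.yaml). If one wanted to sanity-check such a file by hand, this is a sketch, assuming the bundled kubeadm supports the 'config validate' subcommand that recent releases ship:

	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml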
	
	I0731 21:56:58.124523 1180289 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 21:56:58.133861 1180289 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 21:56:58.133932 1180289 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 21:56:58.142888 1180289 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 21:56:58.158632 1180289 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 21:56:58.174557 1180289 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0731 21:56:58.191473 1180289 ssh_runner.go:195] Run: grep 192.168.39.150	control-plane.minikube.internal$ /etc/hosts
	I0731 21:56:58.195231 1180289 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.150	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 21:56:58.207068 1180289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:56:58.322916 1180289 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:56:58.339699 1180289 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478 for IP: 192.168.39.150
	I0731 21:56:58.339734 1180289 certs.go:194] generating shared ca certs ...
	I0731 21:56:58.339787 1180289 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:58.339979 1180289 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 21:56:58.430589 1180289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt ...
	I0731 21:56:58.430621 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt: {Name:mk5770b0bcd3f4bd648e0ccc644a58dbe1587cb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:58.430797 1180289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key ...
	I0731 21:56:58.430807 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key: {Name:mkf019fdc69ae5aac96df4ade70a8c4f286b4e72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:58.430881 1180289 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 21:56:58.782251 1180289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt ...
	I0731 21:56:58.782290 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt: {Name:mkeda7d282005d6b0c2d8d077704d38d50eda25a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:58.782468 1180289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key ...
	I0731 21:56:58.782479 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key: {Name:mkaa5797f0898b5d5fcddd23b73a759d062833f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:58.782549 1180289 certs.go:256] generating profile certs ...
	I0731 21:56:58.782635 1180289 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/client.key
	I0731 21:56:58.782650 1180289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/client.crt with IP's: []
	I0731 21:56:58.843730 1180289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/client.crt ...
	I0731 21:56:58.843763 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/client.crt: {Name:mk2b745dc5a570410f2ea4ad8bc8fca5043ac867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:58.843934 1180289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/client.key ...
	I0731 21:56:58.843944 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/client.key: {Name:mk9f37bc10b5f83bb61ca81697af7eef6c4a5cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:58.844009 1180289 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.key.70681657
	I0731 21:56:58.844027 1180289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.crt.70681657 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.150]
	I0731 21:56:59.133938 1180289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.crt.70681657 ...
	I0731 21:56:59.133972 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.crt.70681657: {Name:mkcdc55786c8a9bba24dabbc3a8519a89cd1911c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:59.134155 1180289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.key.70681657 ...
	I0731 21:56:59.134169 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.key.70681657: {Name:mkc2e28a0b797ec4993a9ebf959852d92ba915a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:59.134254 1180289 certs.go:381] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.crt.70681657 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.crt
	I0731 21:56:59.134329 1180289 certs.go:385] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.key.70681657 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.key
	I0731 21:56:59.134376 1180289 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/proxy-client.key
	I0731 21:56:59.134395 1180289 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/proxy-client.crt with IP's: []
	I0731 21:56:59.562099 1180289 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/proxy-client.crt ...
	I0731 21:56:59.562132 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/proxy-client.crt: {Name:mk9282633bb4b3974dfcfb9c7ae2c2407f256d8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:59.562301 1180289 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/proxy-client.key ...
	I0731 21:56:59.562313 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/proxy-client.key: {Name:mk379e5d47a0ceaeb06bd019c7a589a42947f92e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:59.562489 1180289 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 21:56:59.562526 1180289 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 21:56:59.562548 1180289 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 21:56:59.562570 1180289 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 21:56:59.563196 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 21:56:59.588837 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 21:56:59.612737 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 21:56:59.636791 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 21:56:59.660998 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 21:56:59.684339 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 21:56:59.708014 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 21:56:59.731580 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/addons-801478/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 21:56:59.754920 1180289 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 21:56:59.780276 1180289 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 21:56:59.798281 1180289 ssh_runner.go:195] Run: openssl version
	I0731 21:56:59.815192 1180289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 21:56:59.827583 1180289 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:56:59.833474 1180289 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:56:59.833560 1180289 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 21:56:59.840848 1180289 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
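The commands above publish the minikube CA into the system trust store: the certificate copied to /usr/share/ca-certificates/minikubeCA.pem is linked into /etc/ssl/certs and then linked a second time under its OpenSSL subject hash (b5213941.0 here). The hash-named link can be reproduced from the same openssl invocation, as a sketch:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # yields b5213941.0 for this CA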
	I0731 21:56:59.854268 1180289 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 21:56:59.859717 1180289 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 21:56:59.859787 1180289 kubeadm.go:392] StartCluster: {Name:addons-801478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-801478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:56:59.859871 1180289 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 21:56:59.859921 1180289 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 21:56:59.893413 1180289 cri.go:89] found id: ""
	I0731 21:56:59.893494 1180289 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 21:56:59.903005 1180289 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 21:56:59.912880 1180289 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 21:56:59.922318 1180289 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 21:56:59.922354 1180289 kubeadm.go:157] found existing configuration files:
	
	I0731 21:56:59.922415 1180289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 21:56:59.931112 1180289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 21:56:59.931178 1180289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 21:56:59.940047 1180289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 21:56:59.948203 1180289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 21:56:59.948279 1180289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 21:56:59.956767 1180289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 21:56:59.965199 1180289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 21:56:59.965285 1180289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 21:56:59.974628 1180289 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 21:56:59.982862 1180289 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 21:56:59.982920 1180289 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 21:56:59.991621 1180289 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 21:57:00.041028 1180289 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 21:57:00.041086 1180289 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 21:57:00.155634 1180289 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 21:57:00.155785 1180289 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 21:57:00.155915 1180289 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 21:57:00.350474 1180289 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 21:57:00.512604 1180289 out.go:204]   - Generating certificates and keys ...
	I0731 21:57:00.512753 1180289 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 21:57:00.512840 1180289 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 21:57:00.512934 1180289 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 21:57:00.703698 1180289 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 21:57:00.820396 1180289 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 21:57:00.916251 1180289 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 21:57:01.122266 1180289 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 21:57:01.122427 1180289 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-801478 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0731 21:57:01.258227 1180289 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 21:57:01.258445 1180289 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-801478 localhost] and IPs [192.168.39.150 127.0.0.1 ::1]
	I0731 21:57:01.428973 1180289 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 21:57:01.545312 1180289 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 21:57:01.640827 1180289 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 21:57:01.641047 1180289 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 21:57:01.912217 1180289 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 21:57:02.000978 1180289 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 21:57:02.178514 1180289 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 21:57:02.300382 1180289 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 21:57:02.373064 1180289 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 21:57:02.373730 1180289 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 21:57:02.376151 1180289 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 21:57:02.561179 1180289 out.go:204]   - Booting up control plane ...
	I0731 21:57:02.561334 1180289 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 21:57:02.561438 1180289 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 21:57:02.561561 1180289 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 21:57:02.561724 1180289 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 21:57:02.561846 1180289 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 21:57:02.561922 1180289 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 21:57:02.562106 1180289 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 21:57:02.562205 1180289 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 21:57:03.022660 1180289 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.748762ms
	I0731 21:57:03.022781 1180289 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 21:57:08.520624 1180289 kubeadm.go:310] [api-check] The API server is healthy after 5.501383844s
	I0731 21:57:08.533814 1180289 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 21:57:08.552401 1180289 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 21:57:08.579516 1180289 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 21:57:08.579745 1180289 kubeadm.go:310] [mark-control-plane] Marking the node addons-801478 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 21:57:08.593285 1180289 kubeadm.go:310] [bootstrap-token] Using token: chwbz6.4p0pptm0fu9nfgqt
	I0731 21:57:08.594661 1180289 out.go:204]   - Configuring RBAC rules ...
	I0731 21:57:08.594781 1180289 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 21:57:08.599785 1180289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 21:57:08.610212 1180289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 21:57:08.615494 1180289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 21:57:08.619235 1180289 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 21:57:08.623295 1180289 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 21:57:08.929656 1180289 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 21:57:09.376037 1180289 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 21:57:09.928405 1180289 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 21:57:09.928436 1180289 kubeadm.go:310] 
	I0731 21:57:09.928501 1180289 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 21:57:09.928515 1180289 kubeadm.go:310] 
	I0731 21:57:09.928613 1180289 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 21:57:09.928624 1180289 kubeadm.go:310] 
	I0731 21:57:09.928659 1180289 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 21:57:09.928766 1180289 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 21:57:09.928853 1180289 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 21:57:09.928867 1180289 kubeadm.go:310] 
	I0731 21:57:09.928941 1180289 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 21:57:09.928955 1180289 kubeadm.go:310] 
	I0731 21:57:09.929008 1180289 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 21:57:09.929015 1180289 kubeadm.go:310] 
	I0731 21:57:09.929064 1180289 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 21:57:09.929169 1180289 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 21:57:09.929274 1180289 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 21:57:09.929287 1180289 kubeadm.go:310] 
	I0731 21:57:09.929403 1180289 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 21:57:09.929511 1180289 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 21:57:09.929524 1180289 kubeadm.go:310] 
	I0731 21:57:09.929646 1180289 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token chwbz6.4p0pptm0fu9nfgqt \
	I0731 21:57:09.929786 1180289 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef \
	I0731 21:57:09.929820 1180289 kubeadm.go:310] 	--control-plane 
	I0731 21:57:09.929830 1180289 kubeadm.go:310] 
	I0731 21:57:09.929900 1180289 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 21:57:09.929910 1180289 kubeadm.go:310] 
	I0731 21:57:09.930013 1180289 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token chwbz6.4p0pptm0fu9nfgqt \
	I0731 21:57:09.930148 1180289 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef 
	I0731 21:57:09.930307 1180289 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 21:57:09.930333 1180289 cni.go:84] Creating CNI manager for ""
	I0731 21:57:09.930346 1180289 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:57:09.932083 1180289 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 21:57:09.933398 1180289 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 21:57:09.943445 1180289 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
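The earlier "recommending bridge" decision materialises here as /etc/cni/net.d/1-k8s.conflist. The log does not print the file's contents; a hedged way to inspect it and confirm it wires the bridge plugin to the 10.244.0.0/16 pod CIDR configured above:

	minikube ssh -p addons-801478 -- sudo cat /etc/cni/net.d/1-k8s.conflist
	# Expectation (an assumption, not shown in this log): a conflist whose first plugin is "bridge",
	# with IPAM ranges drawn from the 10.244.0.0/16 pod subnet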
	I0731 21:57:09.961246 1180289 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 21:57:09.961342 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:09.961405 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-801478 minikube.k8s.io/updated_at=2024_07_31T21_57_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=addons-801478 minikube.k8s.io/primary=true
	I0731 21:57:09.984897 1180289 ops.go:34] apiserver oom_adj: -16
	I0731 21:57:10.084001 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:10.584776 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:11.084074 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:11.584604 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:12.084936 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:12.584191 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:13.084417 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:13.584944 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:14.084481 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:14.584187 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:15.084499 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:15.584705 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:16.084167 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:16.584807 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:17.084067 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:17.584661 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:18.084775 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:18.584142 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:19.084545 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:19.584273 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:20.084046 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:20.584837 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:21.084727 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:21.584345 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:22.084800 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:22.584230 1180289 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 21:57:22.669902 1180289 kubeadm.go:1113] duration metric: took 12.708634192s to wait for elevateKubeSystemPrivileges
	I0731 21:57:22.669949 1180289 kubeadm.go:394] duration metric: took 22.81016739s to StartCluster
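The repeated "kubectl get sa default" lines above are minikube polling for the default ServiceAccount roughly every 500ms until it exists; that wait is what the elevateKubeSystemPrivileges duration just above measures. A minimal shell sketch of the same wait pattern, reusing the binary and kubeconfig paths from the log (illustrative only, not minikube's actual Go implementation; the 60s timeout is an assumption):

	# Poll until the "default" ServiceAccount is visible, or give up after 60s.
	deadline=$((SECONDS + 60))
	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  if [ "$SECONDS" -ge "$deadline" ]; then
	    echo "timed out waiting for the default service account" >&2
	    exit 1
	  fi
	  sleep 0.5
	done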
	I0731 21:57:22.669977 1180289 settings.go:142] acquiring lock: {Name:mk076897bfd1af81579aafbccfd5a932e011b343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:57:22.670119 1180289 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 21:57:22.671120 1180289 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/kubeconfig: {Name:mk2865fa7a14d2aa7ec2bbf6e970de47767d4a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:57:22.671400 1180289 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 21:57:22.671773 1180289 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
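The toEnable map above records which addons this run asks minikube to turn on for the profile. For reference, the same per-profile addon state can be listed or toggled afterwards from the CLI; a usage sketch (not part of this log, and assuming the same out/minikube-linux-amd64 build and profile name):

	# Show addon status for the profile, then enable one addon by name.
	out/minikube-linux-amd64 addons list -p addons-801478
	out/minikube-linux-amd64 addons enable metrics-server -p addons-801478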
	I0731 21:57:22.671907 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 21:57:22.672069 1180289 addons.go:69] Setting metrics-server=true in profile "addons-801478"
	I0731 21:57:22.672069 1180289 addons.go:69] Setting ingress=true in profile "addons-801478"
	I0731 21:57:22.672150 1180289 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-801478"
	I0731 21:57:22.672167 1180289 config.go:182] Loaded profile config "addons-801478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:57:22.672122 1180289 addons.go:69] Setting cloud-spanner=true in profile "addons-801478"
	I0731 21:57:22.672184 1180289 addons.go:69] Setting inspektor-gadget=true in profile "addons-801478"
	I0731 21:57:22.672196 1180289 addons.go:69] Setting registry=true in profile "addons-801478"
	I0731 21:57:22.672203 1180289 addons.go:234] Setting addon ingress=true in "addons-801478"
	I0731 21:57:22.672214 1180289 addons.go:234] Setting addon cloud-spanner=true in "addons-801478"
	I0731 21:57:22.672175 1180289 addons.go:69] Setting ingress-dns=true in profile "addons-801478"
	I0731 21:57:22.672226 1180289 addons.go:234] Setting addon registry=true in "addons-801478"
	I0731 21:57:22.672251 1180289 addons.go:234] Setting addon ingress-dns=true in "addons-801478"
	I0731 21:57:22.672260 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.672180 1180289 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-801478"
	I0731 21:57:22.672268 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.672279 1180289 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-801478"
	I0731 21:57:22.672287 1180289 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-801478"
	I0731 21:57:22.672294 1180289 addons.go:69] Setting storage-provisioner=true in profile "addons-801478"
	I0731 21:57:22.672300 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.672331 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.672363 1180289 addons.go:69] Setting volcano=true in profile "addons-801478"
	I0731 21:57:22.672388 1180289 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-801478"
	I0731 21:57:22.672261 1180289 addons.go:234] Setting addon inspektor-gadget=true in "addons-801478"
	I0731 21:57:22.672417 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.672426 1180289 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-801478"
	I0731 21:57:22.672429 1180289 addons.go:234] Setting addon volcano=true in "addons-801478"
	I0731 21:57:22.672461 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.672357 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.672369 1180289 addons.go:234] Setting addon metrics-server=true in "addons-801478"
	I0731 21:57:22.672973 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.673064 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.672533 1180289 addons.go:69] Setting volumesnapshots=true in profile "addons-801478"
	I0731 21:57:22.673068 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.673129 1180289 addons.go:234] Setting addon volumesnapshots=true in "addons-801478"
	I0731 21:57:22.673157 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.673162 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.673171 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.673189 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.673214 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.673093 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.673277 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.673279 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.673274 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.673297 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.673301 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.672160 1180289 addons.go:69] Setting yakd=true in profile "addons-801478"
	I0731 21:57:22.673370 1180289 addons.go:69] Setting helm-tiller=true in profile "addons-801478"
	I0731 21:57:22.673374 1180289 addons.go:69] Setting default-storageclass=true in profile "addons-801478"
	I0731 21:57:22.673394 1180289 addons.go:234] Setting addon yakd=true in "addons-801478"
	I0731 21:57:22.672285 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.673402 1180289 addons.go:234] Setting addon helm-tiller=true in "addons-801478"
	I0731 21:57:22.673410 1180289 addons.go:69] Setting gcp-auth=true in profile "addons-801478"
	I0731 21:57:22.673411 1180289 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-801478"
	I0731 21:57:22.673425 1180289 mustload.go:65] Loading cluster: addons-801478
	I0731 21:57:22.672323 1180289 addons.go:234] Setting addon storage-provisioner=true in "addons-801478"
	I0731 21:57:22.673860 1180289 out.go:177] * Verifying Kubernetes components...
	I0731 21:57:22.673947 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.673969 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.674036 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.674041 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.674081 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.674103 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.674109 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.674141 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.674153 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.674168 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.674188 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.674297 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.674315 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.674343 1180289 config.go:182] Loaded profile config "addons-801478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 21:57:22.674349 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.674363 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.674531 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.674894 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.675394 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.675449 1180289 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 21:57:22.675459 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.695200 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34555
	I0731 21:57:22.695219 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38237
	I0731 21:57:22.695200 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38409
	I0731 21:57:22.695215 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42653
	I0731 21:57:22.695669 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.695914 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.695992 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.696422 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.696446 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.696602 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.696617 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.696650 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.696666 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.696677 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.697009 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.697009 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.697189 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.697206 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.697587 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.697608 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.697608 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.697624 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.697669 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.697678 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.697734 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0731 21:57:22.708505 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.708672 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.708744 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.708971 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.709010 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.712592 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38233
	I0731 21:57:22.712657 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.712693 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.712799 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.713304 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.713409 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.713504 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.714009 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.714177 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.714197 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.714650 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.714692 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.716686 1180289 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-801478"
	I0731 21:57:22.716769 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.717195 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.717273 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.717877 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.718963 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.719053 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.746624 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42813
	I0731 21:57:22.747218 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.747856 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.747883 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.748541 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.749138 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.749191 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.749405 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0731 21:57:22.749885 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.750421 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.750439 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.750826 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.751435 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.751480 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.751764 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0731 21:57:22.752393 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.752932 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.752954 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.753304 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.753903 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.753942 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.754494 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
	I0731 21:57:22.755161 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.755725 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.755741 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.756119 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.756633 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.756667 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.757830 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I0731 21:57:22.758494 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.759194 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.759213 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.759836 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.760507 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.760549 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.765734 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43735
	I0731 21:57:22.766347 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.766953 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.766977 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.767393 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.768011 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.768054 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.770461 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I0731 21:57:22.771080 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.771649 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.771667 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.771981 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43387
	I0731 21:57:22.772210 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.772277 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43399
	I0731 21:57:22.772409 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.773094 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.773133 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.773385 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.773854 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.773872 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.774002 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.774012 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.774226 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39193
	I0731 21:57:22.774432 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.774530 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.774613 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.774813 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.775216 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44793
	I0731 21:57:22.775695 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.775896 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.776522 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.776542 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.776681 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.776694 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.776908 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.777147 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.777438 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.777685 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.778803 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.779082 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.779629 1180289 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0731 21:57:22.779685 1180289 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0731 21:57:22.780893 1180289 addons.go:234] Setting addon default-storageclass=true in "addons-801478"
	I0731 21:57:22.780943 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.781315 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.781354 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.781462 1180289 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0731 21:57:22.781481 1180289 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0731 21:57:22.781504 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.781799 1180289 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 21:57:22.781815 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0731 21:57:22.781835 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.785476 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36499
	I0731 21:57:22.785562 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42293
	I0731 21:57:22.786023 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.786106 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.786588 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.786607 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.786686 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.786703 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.786800 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.786865 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.786898 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.787093 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.787254 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.787395 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.787710 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.787729 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.787867 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.787879 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.787937 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.788220 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.788293 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.788463 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.788729 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.789024 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.789085 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.789313 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.790239 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.790284 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.791429 1180289 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0731 21:57:22.792555 1180289 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0731 21:57:22.792578 1180289 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0731 21:57:22.792601 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.796005 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.796956 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.797585 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.797618 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.797642 1180289 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0731 21:57:22.797810 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.798023 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.798254 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.798422 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.798979 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36259
	I0731 21:57:22.799011 1180289 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0731 21:57:22.799026 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0731 21:57:22.799045 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.799586 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.800347 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46763
	I0731 21:57:22.800585 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.800599 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.800898 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.801110 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.801580 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.801692 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.802037 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.802943 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.803550 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.803577 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.803583 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44153
	I0731 21:57:22.803770 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.803889 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.803973 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.804152 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.804213 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.804370 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.805071 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.805119 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.805362 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.805929 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.805951 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.806381 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.806565 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.807193 1180289 out.go:177]   - Using image docker.io/registry:2.8.3
	I0731 21:57:22.808127 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.808523 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44725
	I0731 21:57:22.809091 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.809614 1180289 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0731 21:57:22.809706 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.809723 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.809614 1180289 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0731 21:57:22.810257 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.810508 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.811309 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41501
	I0731 21:57:22.811861 1180289 out.go:177]   - Using image docker.io/busybox:stable
	I0731 21:57:22.811916 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.812510 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.812529 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.812928 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.813105 1180289 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 21:57:22.813125 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0731 21:57:22.813146 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.813150 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.813808 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0731 21:57:22.814418 1180289 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0731 21:57:22.814458 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.814544 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0731 21:57:22.814685 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.815231 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.815251 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.816352 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.818517 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.818519 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.818562 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.819258 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.819510 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37725
	I0731 21:57:22.819735 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.819793 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.819815 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.819921 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.820020 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.820226 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.820647 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.820666 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.820744 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.821265 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.821440 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.821587 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.821716 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.822233 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.822909 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.823222 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.824389 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0731 21:57:22.824510 1180289 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0731 21:57:22.824789 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.825079 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.825250 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.825264 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.825634 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.826372 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.826795 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.826494 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34171
	I0731 21:57:22.825723 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.825693 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38603
	I0731 21:57:22.826624 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.827255 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36415
	I0731 21:57:22.827616 1180289 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 21:57:22.827927 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.827974 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.827933 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.828602 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.828625 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.828682 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.828698 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.828756 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
	I0731 21:57:22.828765 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.828779 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.829063 1180289 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:57:22.829079 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 21:57:22.829094 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.829099 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.829716 1180289 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0731 21:57:22.829864 1180289 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 21:57:22.829994 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.830006 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.829996 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.830021 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.829997 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.830570 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.830653 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33403
	I0731 21:57:22.830784 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.831085 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.831449 1180289 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 21:57:22.831480 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0731 21:57:22.831534 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.831622 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.831645 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.831676 1180289 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0731 21:57:22.832683 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.832859 1180289 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0731 21:57:22.832877 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0731 21:57:22.832896 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.833160 1180289 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 21:57:22.833853 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.833882 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.834288 1180289 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0731 21:57:22.834511 1180289 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 21:57:22.834529 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0731 21:57:22.834546 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.834606 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.834994 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.835021 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.835439 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:22.835456 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:22.835702 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:22.835717 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:22.835727 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:22.835734 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:22.835839 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.836230 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.836250 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.836529 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.836650 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.836810 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.836933 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.837049 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.837110 1180289 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0731 21:57:22.837409 1180289 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0731 21:57:22.837509 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.838313 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.838331 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.839595 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.839958 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.840601 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.840620 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.840653 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.840665 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.840848 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.840917 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:22.841310 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:22.841332 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:22.841365 1180289 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0731 21:57:22.841611 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.841677 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.841718 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:22.841744 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.842018 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:22.842044 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.842048 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 21:57:22.842140 1180289 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0731 21:57:22.842361 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.842469 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.842574 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.842635 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.842708 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.842842 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.842914 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.842930 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.843107 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.843182 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.843201 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.843323 1180289 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0731 21:57:22.843328 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.843481 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.843581 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.843658 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.843702 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.843814 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.843934 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.844711 1180289 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0731 21:57:22.844728 1180289 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0731 21:57:22.844783 1180289 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0731 21:57:22.844808 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.847141 1180289 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0731 21:57:22.848269 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.848767 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.848796 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.848953 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.849149 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.849317 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.849470 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.849630 1180289 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0731 21:57:22.850170 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37245
	I0731 21:57:22.850636 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.851151 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.851168 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.851500 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.851747 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:22.851833 1180289 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0731 21:57:22.853017 1180289 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0731 21:57:22.853335 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:22.853561 1180289 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 21:57:22.853584 1180289 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 21:57:22.853605 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.855538 1180289 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0731 21:57:22.856759 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.856821 1180289 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0731 21:57:22.857342 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.857373 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.857514 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.857725 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.857907 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.858055 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.858280 1180289 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0731 21:57:22.858295 1180289 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0731 21:57:22.858318 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:22.861751 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.862096 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:22.862130 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:22.862294 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:22.862513 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:22.862665 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:22.862826 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:22.863795 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45083
	I0731 21:57:22.864181 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:22.864763 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:22.864781 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:22.865068 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:22.865270 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	W0731 21:57:22.873173 1180289 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56096->192.168.39.150:22: read: connection reset by peer
	I0731 21:57:22.873211 1180289 retry.go:31] will retry after 280.01905ms: ssh: handshake failed: read tcp 192.168.39.1:56096->192.168.39.150:22: read: connection reset by peer
	I0731 21:57:23.121038 1180289 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0731 21:57:23.121067 1180289 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0731 21:57:23.122542 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0731 21:57:23.159389 1180289 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 21:57:23.159440 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 21:57:23.202598 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0731 21:57:23.214702 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0731 21:57:23.222992 1180289 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0731 21:57:23.223029 1180289 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0731 21:57:23.238276 1180289 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0731 21:57:23.238308 1180289 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0731 21:57:23.312029 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 21:57:23.317039 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 21:57:23.351935 1180289 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0731 21:57:23.351962 1180289 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0731 21:57:23.386739 1180289 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0731 21:57:23.386771 1180289 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0731 21:57:23.412273 1180289 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0731 21:57:23.412305 1180289 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0731 21:57:23.429489 1180289 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0731 21:57:23.429525 1180289 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0731 21:57:23.429662 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0731 21:57:23.485342 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0731 21:57:23.486364 1180289 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0731 21:57:23.486397 1180289 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0731 21:57:23.498423 1180289 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0731 21:57:23.498449 1180289 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0731 21:57:23.519564 1180289 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0731 21:57:23.519597 1180289 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0731 21:57:23.641112 1180289 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0731 21:57:23.641135 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0731 21:57:23.645150 1180289 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 21:57:23.645178 1180289 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0731 21:57:23.646638 1180289 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0731 21:57:23.646656 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0731 21:57:23.659309 1180289 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0731 21:57:23.659337 1180289 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0731 21:57:23.699432 1180289 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0731 21:57:23.699473 1180289 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0731 21:57:23.701771 1180289 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0731 21:57:23.701791 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0731 21:57:23.741660 1180289 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0731 21:57:23.741700 1180289 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0731 21:57:23.814594 1180289 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0731 21:57:23.814624 1180289 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0731 21:57:23.851617 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0731 21:57:23.852208 1180289 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0731 21:57:23.852229 1180289 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0731 21:57:23.856066 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0731 21:57:23.950602 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0731 21:57:23.977138 1180289 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0731 21:57:23.977178 1180289 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0731 21:57:23.981044 1180289 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0731 21:57:23.981081 1180289 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0731 21:57:24.039545 1180289 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0731 21:57:24.039582 1180289 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0731 21:57:24.146191 1180289 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:57:24.146221 1180289 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0731 21:57:24.223285 1180289 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0731 21:57:24.223319 1180289 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0731 21:57:24.228247 1180289 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0731 21:57:24.228276 1180289 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0731 21:57:24.317091 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0731 21:57:24.321672 1180289 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 21:57:24.321705 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0731 21:57:24.389221 1180289 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0731 21:57:24.389265 1180289 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0731 21:57:24.516078 1180289 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0731 21:57:24.516119 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0731 21:57:24.634860 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 21:57:24.659778 1180289 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0731 21:57:24.659819 1180289 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0731 21:57:24.742535 1180289 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 21:57:24.742574 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0731 21:57:24.853141 1180289 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0731 21:57:24.853178 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0731 21:57:24.966190 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0731 21:57:25.154833 1180289 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0731 21:57:25.154867 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0731 21:57:25.392557 1180289 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 21:57:25.392591 1180289 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0731 21:57:25.703663 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0731 21:57:29.897121 1180289 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0731 21:57:29.897177 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:29.901073 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:29.901532 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:29.901564 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:29.901795 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:29.902032 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:29.902191 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:29.902324 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:30.261855 1180289 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0731 21:57:30.381816 1180289 addons.go:234] Setting addon gcp-auth=true in "addons-801478"
	I0731 21:57:30.381888 1180289 host.go:66] Checking if "addons-801478" exists ...
	I0731 21:57:30.382318 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:30.382358 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:30.399228 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36323
	I0731 21:57:30.399782 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:30.400318 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:30.400342 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:30.400649 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:30.401247 1180289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 21:57:30.401295 1180289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 21:57:30.418373 1180289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I0731 21:57:30.418909 1180289 main.go:141] libmachine: () Calling .GetVersion
	I0731 21:57:30.419372 1180289 main.go:141] libmachine: Using API Version  1
	I0731 21:57:30.419394 1180289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 21:57:30.419721 1180289 main.go:141] libmachine: () Calling .GetMachineName
	I0731 21:57:30.419901 1180289 main.go:141] libmachine: (addons-801478) Calling .GetState
	I0731 21:57:30.421473 1180289 main.go:141] libmachine: (addons-801478) Calling .DriverName
	I0731 21:57:30.421736 1180289 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0731 21:57:30.421769 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHHostname
	I0731 21:57:30.424406 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:30.424853 1180289 main.go:141] libmachine: (addons-801478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:65:63", ip: ""} in network mk-addons-801478: {Iface:virbr1 ExpiryTime:2024-07-31 22:56:45 +0000 UTC Type:0 Mac:52:54:00:90:65:63 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:addons-801478 Clientid:01:52:54:00:90:65:63}
	I0731 21:57:30.424884 1180289 main.go:141] libmachine: (addons-801478) DBG | domain addons-801478 has defined IP address 192.168.39.150 and MAC address 52:54:00:90:65:63 in network mk-addons-801478
	I0731 21:57:30.425074 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHPort
	I0731 21:57:30.425252 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHKeyPath
	I0731 21:57:30.425411 1180289 main.go:141] libmachine: (addons-801478) Calling .GetSSHUsername
	I0731 21:57:30.425593 1180289 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/addons-801478/id_rsa Username:docker}
	I0731 21:57:30.737713 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.615127788s)
	I0731 21:57:30.737754 1180289 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.578315892s)
	I0731 21:57:30.737778 1180289 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.578309362s)
	I0731 21:57:30.737804 1180289 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0731 21:57:30.737827 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.535188669s)
	I0731 21:57:30.737785 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.737885 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.737915 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.523188791s)
	I0731 21:57:30.737938 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.737950 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.737860 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.737972 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.738000 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.425941743s)
	I0731 21:57:30.738042 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.738053 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.738071 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.421004456s)
	I0731 21:57:30.738094 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.738102 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.738148 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.308465695s)
	I0731 21:57:30.738169 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.738177 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.738283 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.252900409s)
	I0731 21:57:30.738299 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.738307 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.738665 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (6.887013464s)
	I0731 21:57:30.738693 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.738705 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.738784 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.882694268s)
	I0731 21:57:30.738817 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.738830 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.738916 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.78827307s)
	I0731 21:57:30.738931 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.738940 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.739003 1180289 node_ready.go:35] waiting up to 6m0s for node "addons-801478" to be "Ready" ...
	I0731 21:57:30.739026 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.421908279s)
	I0731 21:57:30.739043 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.739051 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.739177 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.104282924s)
	W0731 21:57:30.739214 1180289 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0731 21:57:30.739233 1180289 retry.go:31] will retry after 226.639109ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
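The failure above is a CRD registration race: the VolumeSnapshotClass object is applied in the same kubectl batch as the CRDs that define its kind, so the API server has no mapping for `VolumeSnapshotClass` yet. The log shows the runner simply retrying, and the later `kubectl apply --force` of the same manifests (at 21:57:30.966, completed at 21:57:33) goes through. Below is a minimal Go sketch of the usual guard for this race, waiting for a CRD's `Established` condition before applying objects of that kind. It assumes the apiextensions clientset and apimachinery wait helpers are available; `waitForCRDEstablished` and the package name are illustrative, not minikube's implementation.

```go
package sketch

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForCRDEstablished polls until the named CustomResourceDefinition reports
// the Established condition, after which dependent objects can be applied
// without hitting "resource mapping not found".
func waitForCRDEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, time.Minute, true,
		func(ctx context.Context) (bool, error) {
			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not registered yet; keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
}
```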
	I0731 21:57:30.739299 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.773064418s)
	I0731 21:57:30.739320 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.739328 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.740305 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.740335 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.740344 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.740348 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.740358 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.740365 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.740378 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.740386 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.740395 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.740406 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.740507 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.740552 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.740563 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.740576 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.740585 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.740656 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.740682 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.740689 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.740696 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.740718 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.740757 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.740798 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.740805 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.740813 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.740824 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.740968 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.741008 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.741016 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.741023 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.741030 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.741084 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.741106 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.741112 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.741120 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.741127 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.741176 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.741196 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.741202 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.741210 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.741217 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.741257 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.741280 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.741290 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.741297 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.741304 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.741547 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.741582 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.741588 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.741596 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.741603 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.741653 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.741675 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.741681 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.741688 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.741697 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.743498 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.743530 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.743537 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.743681 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.743705 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.743712 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.743793 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.743812 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.743820 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.743829 1180289 addons.go:475] Verifying addon ingress=true in "addons-801478"
	I0731 21:57:30.743958 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.743988 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.744023 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.744031 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.744204 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.744219 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.744228 1180289 addons.go:475] Verifying addon metrics-server=true in "addons-801478"
	I0731 21:57:30.744285 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.744299 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.744491 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.744524 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.744550 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.744558 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.744565 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.744570 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.745515 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.745550 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.745560 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.745703 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.745713 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.745716 1180289 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-801478 service yakd-dashboard -n yakd-dashboard
	
	I0731 21:57:30.745753 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.745761 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.745772 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.745781 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.745791 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.745800 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.745752 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.745734 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.745923 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.746173 1180289 out.go:177] * Verifying ingress addon...
	I0731 21:57:30.746533 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.746555 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.746568 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.746577 1180289 addons.go:475] Verifying addon registry=true in "addons-801478"
	I0731 21:57:30.748689 1180289 out.go:177] * Verifying registry addon...
	I0731 21:57:30.749833 1180289 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0731 21:57:30.750823 1180289 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0731 21:57:30.780969 1180289 node_ready.go:49] node "addons-801478" has status "Ready":"True"
	I0731 21:57:30.781001 1180289 node_ready.go:38] duration metric: took 41.975348ms for node "addons-801478" to be "Ready" ...
	I0731 21:57:30.781015 1180289 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
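The node_ready/pod_ready waits that follow amount to reading the `Ready` condition off each object's status until it reports `True`. A minimal client-go sketch of that per-pod check, assuming the corev1 types; `isPodReady` and the package name are illustrative helpers, not the functions minikube itself uses.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a Pod carries the Ready condition with status
// True, which is the check behind log lines like `has status "Ready":"True"`.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```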
	I0731 21:57:30.796604 1180289 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0731 21:57:30.796643 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:30.796754 1180289 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0731 21:57:30.796774 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:30.832908 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.832939 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.832974 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:30.832997 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:30.833351 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.833372 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:30.833381 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.833386 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:30.833392 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:30.833404 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	W0731 21:57:30.833517 1180289 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
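The "object has been modified" message is the API server's optimistic-concurrency conflict: the `local-path` StorageClass changed between the read and the update, so the write carried a stale resourceVersion and was rejected. The usual remedy is to re-read and retry the update, for example with client-go's retry.RetryOnConflict, as in the minimal sketch below; `markNonDefault` and the package name are illustrative, not minikube's implementation of the default-storageclass callback.

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault clears the default-class annotation on a StorageClass,
// re-reading and retrying whenever the update hits a resourceVersion conflict.
func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	})
}
```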
	I0731 21:57:30.845900 1180289 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-gz2sj" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:30.880997 1180289 pod_ready.go:92] pod "coredns-7db6d8ff4d-gz2sj" in "kube-system" namespace has status "Ready":"True"
	I0731 21:57:30.881025 1180289 pod_ready.go:81] duration metric: took 35.09405ms for pod "coredns-7db6d8ff4d-gz2sj" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:30.881036 1180289 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hb8hn" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:30.905091 1180289 pod_ready.go:92] pod "coredns-7db6d8ff4d-hb8hn" in "kube-system" namespace has status "Ready":"True"
	I0731 21:57:30.905117 1180289 pod_ready.go:81] duration metric: took 24.074458ms for pod "coredns-7db6d8ff4d-hb8hn" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:30.905131 1180289 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-801478" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:30.915276 1180289 pod_ready.go:92] pod "etcd-addons-801478" in "kube-system" namespace has status "Ready":"True"
	I0731 21:57:30.915301 1180289 pod_ready.go:81] duration metric: took 10.162941ms for pod "etcd-addons-801478" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:30.915311 1180289 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-801478" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:30.927768 1180289 pod_ready.go:92] pod "kube-apiserver-addons-801478" in "kube-system" namespace has status "Ready":"True"
	I0731 21:57:30.927796 1180289 pod_ready.go:81] duration metric: took 12.47694ms for pod "kube-apiserver-addons-801478" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:30.927812 1180289 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-801478" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:30.966106 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0731 21:57:31.146786 1180289 pod_ready.go:92] pod "kube-controller-manager-addons-801478" in "kube-system" namespace has status "Ready":"True"
	I0731 21:57:31.146815 1180289 pod_ready.go:81] duration metric: took 218.994745ms for pod "kube-controller-manager-addons-801478" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:31.146831 1180289 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7d5l2" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:31.244296 1180289 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-801478" context rescaled to 1 replicas
	I0731 21:57:31.265397 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:31.278716 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:31.545735 1180289 pod_ready.go:92] pod "kube-proxy-7d5l2" in "kube-system" namespace has status "Ready":"True"
	I0731 21:57:31.545769 1180289 pod_ready.go:81] duration metric: took 398.926091ms for pod "kube-proxy-7d5l2" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:31.545783 1180289 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-801478" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:31.809953 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:31.810357 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:31.834705 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.130978881s)
	I0731 21:57:31.834769 1180289 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.413005581s)
	I0731 21:57:31.834787 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:31.834807 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:31.835156 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:31.835174 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:31.835185 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:31.835193 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:31.835466 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:31.835526 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:31.835554 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:31.835566 1180289 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-801478"
	I0731 21:57:31.836887 1180289 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0731 21:57:31.836964 1180289 out.go:177] * Verifying csi-hostpath-driver addon...
	I0731 21:57:31.838546 1180289 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0731 21:57:31.839575 1180289 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0731 21:57:31.839727 1180289 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0731 21:57:31.839742 1180289 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0731 21:57:31.883978 1180289 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0731 21:57:31.884010 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:31.919452 1180289 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0731 21:57:31.919479 1180289 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0731 21:57:31.945318 1180289 pod_ready.go:92] pod "kube-scheduler-addons-801478" in "kube-system" namespace has status "Ready":"True"
	I0731 21:57:31.945343 1180289 pod_ready.go:81] duration metric: took 399.551794ms for pod "kube-scheduler-addons-801478" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:31.945354 1180289 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace to be "Ready" ...
	I0731 21:57:31.995323 1180289 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 21:57:31.995346 1180289 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0731 21:57:32.056208 1180289 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0731 21:57:32.256757 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:32.260822 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:32.348943 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:32.753739 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:32.761211 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:32.857287 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:33.054142 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.087979672s)
	I0731 21:57:33.054206 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:33.054222 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:33.054558 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:33.054595 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:33.054610 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:33.054624 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:33.054634 1180289 main.go:141] libmachine: (addons-801478) DBG | Closing plugin on server side
	I0731 21:57:33.054984 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:33.055006 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:33.268657 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:33.270334 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:33.362655 1180289 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.306375434s)
	I0731 21:57:33.362721 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:33.362741 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:33.363072 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:33.363098 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:33.363110 1180289 main.go:141] libmachine: Making call to close driver server
	I0731 21:57:33.363119 1180289 main.go:141] libmachine: (addons-801478) Calling .Close
	I0731 21:57:33.363444 1180289 main.go:141] libmachine: Successfully made call to close driver server
	I0731 21:57:33.363463 1180289 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 21:57:33.365610 1180289 addons.go:475] Verifying addon gcp-auth=true in "addons-801478"
	I0731 21:57:33.368079 1180289 out.go:177] * Verifying gcp-auth addon...
	I0731 21:57:33.369971 1180289 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0731 21:57:33.403106 1180289 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0731 21:57:33.403128 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:33.408040 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:33.756286 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:33.756777 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:33.845395 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:33.874237 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:33.954587 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:34.262557 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:34.263123 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:34.346428 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:34.373789 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:34.754098 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:34.757976 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:34.846323 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:34.874151 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:35.256804 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:35.257000 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:35.346678 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:35.373890 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:35.753864 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:35.755923 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:35.844537 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:35.873486 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:36.256194 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:36.256513 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:36.345266 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:36.374419 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:36.451296 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:36.754030 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:36.755832 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:36.845386 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:36.873929 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:37.256124 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:37.258964 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:37.345076 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:37.373391 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:37.755552 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:37.757271 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:37.845876 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:37.873812 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:38.254304 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:38.256429 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:38.344950 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:38.374263 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:38.755022 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:38.756077 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:38.844881 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:38.874161 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:38.952412 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:39.255432 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:39.256321 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:39.344864 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:39.374280 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:39.753965 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:39.755128 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:39.848413 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:39.874007 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:40.266515 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:40.266885 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:40.344511 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:40.373894 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:40.754078 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:40.756307 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:40.844816 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:40.873999 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:41.254218 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:41.257469 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:41.345099 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:41.373437 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:41.451944 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:41.764274 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:41.764279 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:41.844775 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:41.873943 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:42.255432 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:42.256271 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:42.349903 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:42.374003 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:42.754400 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:42.756570 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:42.844995 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:42.873958 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:43.255455 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:43.255961 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:43.344685 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:43.373563 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:43.454868 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:43.755049 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:43.755399 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:43.844820 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:43.874133 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:44.255235 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:44.257604 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:44.345040 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:44.373757 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:44.756477 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:44.757100 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:44.845650 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:44.874286 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:45.254223 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:45.255365 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:45.344936 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:45.373465 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:45.756426 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:45.758483 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:45.936060 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:45.936962 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:45.950944 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:46.255620 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:46.256945 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:46.346106 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:46.374120 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:46.754380 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:46.755572 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:46.845478 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:46.873654 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:47.255330 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:47.259287 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:47.345314 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:47.373850 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:47.756392 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:47.758077 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:47.847940 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:47.874092 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:47.952797 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:48.255609 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:48.255805 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:48.346876 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:48.373517 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:48.757186 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:48.757549 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:48.845118 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:48.874369 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:49.255440 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:49.255787 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:49.345414 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:49.373197 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:49.756390 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:49.756435 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:49.845090 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:49.873848 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:50.255585 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:50.255703 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:50.345871 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:50.374906 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:50.451170 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:50.757377 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:50.763421 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:50.845263 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:50.873583 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:51.255742 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:51.255841 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:51.346018 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:51.375211 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:51.758464 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:51.759479 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:51.845117 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:51.873729 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:52.255483 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:52.258414 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:52.345649 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:52.374006 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:52.451512 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:52.757031 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:52.758191 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:52.845388 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:52.873647 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:53.256198 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:53.257580 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:53.346782 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:53.373812 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:53.754731 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:53.755762 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:53.845797 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:53.874035 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:54.258952 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:54.260275 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:54.345489 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:54.374132 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:54.455978 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:54.754062 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:54.756059 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:54.845155 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:54.873651 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:55.255911 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:55.257688 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:55.345461 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:55.373863 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:55.756445 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:55.756741 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:55.848402 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:55.875279 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:56.388870 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:56.389325 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:56.389669 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:56.392576 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:56.753975 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:56.756586 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:56.846756 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:56.874282 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:56.951296 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:57.254879 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:57.256064 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:57.344744 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:57.374176 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:57.755099 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:57.756240 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:57.845335 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:57.874451 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:58.256166 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:58.256259 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:58.345439 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:58.373153 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:58.754228 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:58.755880 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:58.844298 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:58.873762 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:59.255712 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:59.259047 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:59.346059 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:59.374061 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:57:59.451393 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:57:59.756255 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:57:59.763615 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:57:59.845496 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:57:59.873442 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:00.256796 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:00.258145 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:00.345011 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:00.373549 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:00.754174 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:00.756873 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:00.845500 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:00.873519 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:01.260073 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:01.260154 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:01.345422 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:01.374466 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:01.451698 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:01.755231 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:01.758480 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:01.846392 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:01.874078 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:02.254525 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:02.254959 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:02.344969 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:02.374524 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:02.754279 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:02.755652 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:02.845280 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:02.873544 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:03.255393 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:03.256642 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:03.345392 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:03.373609 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:03.760175 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:03.760174 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:03.844467 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:03.874013 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:03.955163 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:04.254276 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:04.255594 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:04.345323 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:04.373560 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:04.756150 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:04.756373 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:04.844437 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:04.874449 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:05.255554 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:05.256483 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:05.345266 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:05.373922 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:05.755355 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:05.755358 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:05.844804 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:05.873250 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:06.254109 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:06.256478 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:06.344577 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:06.373690 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:06.451928 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:06.898142 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:06.900287 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:06.900520 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:06.902441 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:07.253998 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:07.254890 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:07.345254 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:07.373811 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:07.755502 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:07.757766 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:07.844873 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:07.873773 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:08.253986 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:08.256894 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:08.345072 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:08.373238 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:08.460987 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:08.755764 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:08.760551 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:08.846116 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:08.874189 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:09.255067 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:09.256508 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:09.345342 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:09.373427 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:09.755233 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0731 21:58:09.756475 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:09.845754 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:09.876243 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:10.255474 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:10.255646 1180289 kapi.go:107] duration metric: took 39.504822184s to wait for kubernetes.io/minikube-addons=registry ...
	I0731 21:58:10.345454 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:10.374165 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:10.754290 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:10.845592 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:10.874117 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:10.951369 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:11.254295 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:11.344555 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:11.374116 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:11.754681 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:11.844984 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:11.874184 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:12.255408 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:12.344575 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:12.373990 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:12.756768 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:12.844907 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:12.873244 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:12.951564 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:13.254208 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:13.345368 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:13.373893 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:13.754403 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:13.848931 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:13.873549 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:14.255944 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:14.345112 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:14.373763 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:14.754198 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:14.844331 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:14.873860 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:15.254356 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:15.354893 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:15.374646 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:15.450792 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:15.754900 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:15.844903 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:15.873515 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:16.254010 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:16.345002 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:16.374226 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:16.753424 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:16.844566 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:16.873759 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:17.398496 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:17.398751 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:17.398997 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:17.456974 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:17.756925 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:17.854218 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:17.874013 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:18.257009 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:18.347595 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:18.374047 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:18.753703 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:18.850881 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:18.873330 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:19.254610 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:19.345364 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:19.374585 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:19.755900 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:19.847236 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:19.873850 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:19.951124 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:20.430291 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:20.431833 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:20.434590 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:20.754625 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:20.845093 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:20.873550 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:21.254887 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:21.345014 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:21.373481 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:21.754598 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:21.847077 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:21.874158 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:22.253898 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:22.344787 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:22.374258 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:22.452277 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:22.755460 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:22.845104 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:22.874197 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:23.254970 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:23.346762 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:23.373992 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:23.754914 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:23.851975 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:23.875292 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:24.257713 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:24.806224 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:24.814084 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:24.817724 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:24.832879 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:24.850808 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:24.875056 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:25.253448 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:25.346469 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:25.373734 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:25.754913 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:25.844995 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:25.873286 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:26.268299 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:26.345072 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:26.374729 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:26.755866 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:26.845190 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:26.873521 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:26.952235 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:27.254104 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:27.346384 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:27.375062 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:27.755455 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:27.847920 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:27.873739 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:28.253880 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:28.347508 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:28.375492 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:28.753933 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:28.845045 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:28.873974 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:28.966271 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:29.255220 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:29.345562 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:29.373962 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:29.755683 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:29.845026 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:29.874379 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:30.254821 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:30.345345 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:30.382764 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:30.754402 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:30.847180 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:30.873683 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:31.254548 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:31.345023 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:31.373326 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:31.451455 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:31.754034 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:31.845313 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:31.874416 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:32.255761 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:32.351270 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:32.383535 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:32.754485 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:32.845405 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:32.874322 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:33.255402 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:33.348974 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:33.374948 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:33.754619 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:33.845705 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:33.873726 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:33.951068 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:34.254334 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:34.345233 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:34.373541 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:34.761110 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:34.846307 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:34.873907 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:35.254322 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:35.344888 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:35.374310 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:35.754479 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:35.847756 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:35.873846 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:36.254936 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:36.345539 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0731 21:58:36.373639 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:36.451879 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:36.754594 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:36.845188 1180289 kapi.go:107] duration metric: took 1m5.005611767s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0731 21:58:36.873344 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:37.255927 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:37.380999 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:37.919378 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:37.921102 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:38.254716 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:38.374029 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:38.754236 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:38.874058 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:38.951987 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:39.254548 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:39.375236 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:39.754144 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:39.873884 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:40.329962 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:40.373563 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:40.754204 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:40.873436 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:41.255237 1180289 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0731 21:58:41.373895 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:41.452173 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:41.757166 1180289 kapi.go:107] duration metric: took 1m11.007343131s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0731 21:58:41.873906 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:42.373748 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:42.873578 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:43.373152 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:43.874075 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:43.952478 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:44.373955 1180289 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0731 21:58:44.874419 1180289 kapi.go:107] duration metric: took 1m11.504444836s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0731 21:58:44.876174 1180289 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-801478 cluster.
	I0731 21:58:44.877370 1180289 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0731 21:58:44.878459 1180289 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0731 21:58:44.879816 1180289 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, cloud-spanner, metrics-server, inspektor-gadget, storage-provisioner, helm-tiller, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0731 21:58:44.880964 1180289 addons.go:510] duration metric: took 1m22.209241689s for enable addons: enabled=[ingress-dns nvidia-device-plugin cloud-spanner metrics-server inspektor-gadget storage-provisioner helm-tiller yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0731 21:58:46.451511 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:48.451619 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:50.452205 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:52.452340 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:54.951585 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:57.450997 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:58:59.451812 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:01.951182 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:03.951667 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:05.952424 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:08.451441 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:10.451892 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:12.952506 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:15.451346 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:17.951771 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:19.952736 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:22.451805 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:24.954296 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:27.451942 1180289 pod_ready.go:102] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"False"
	I0731 21:59:28.952893 1180289 pod_ready.go:92] pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace has status "Ready":"True"
	I0731 21:59:28.952925 1180289 pod_ready.go:81] duration metric: took 1m57.007564981s for pod "metrics-server-c59844bb4-7bqr8" in "kube-system" namespace to be "Ready" ...
	I0731 21:59:28.952936 1180289 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-bwnfv" in "kube-system" namespace to be "Ready" ...
	I0731 21:59:28.957406 1180289 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-bwnfv" in "kube-system" namespace has status "Ready":"True"
	I0731 21:59:28.957435 1180289 pod_ready.go:81] duration metric: took 4.491906ms for pod "nvidia-device-plugin-daemonset-bwnfv" in "kube-system" namespace to be "Ready" ...
	I0731 21:59:28.957453 1180289 pod_ready.go:38] duration metric: took 1m58.176423438s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 21:59:28.957474 1180289 api_server.go:52] waiting for apiserver process to appear ...
	I0731 21:59:28.957516 1180289 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:59:28.957579 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:59:28.999430 1180289 cri.go:89] found id: "fab4d25745e5f6856297a717602df950c877dd91790628668c3e5911a1491259"
	I0731 21:59:28.999463 1180289 cri.go:89] found id: ""
	I0731 21:59:28.999475 1180289 logs.go:276] 1 containers: [fab4d25745e5f6856297a717602df950c877dd91790628668c3e5911a1491259]
	I0731 21:59:28.999538 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:29.003687 1180289 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:59:29.003759 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:59:29.040762 1180289 cri.go:89] found id: "51187821daa3ce8f094d2968c8aff3151ef9a841f337f7dcaaff9ec3b2433c65"
	I0731 21:59:29.040789 1180289 cri.go:89] found id: ""
	I0731 21:59:29.040797 1180289 logs.go:276] 1 containers: [51187821daa3ce8f094d2968c8aff3151ef9a841f337f7dcaaff9ec3b2433c65]
	I0731 21:59:29.040852 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:29.044461 1180289 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:59:29.044528 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:59:29.083567 1180289 cri.go:89] found id: "9a43205fec093e42ca052676505e33cc6c584d03a59b93f7770a2ef6ead658a8"
	I0731 21:59:29.083592 1180289 cri.go:89] found id: ""
	I0731 21:59:29.083601 1180289 logs.go:276] 1 containers: [9a43205fec093e42ca052676505e33cc6c584d03a59b93f7770a2ef6ead658a8]
	I0731 21:59:29.083653 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:29.087798 1180289 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:59:29.087878 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:59:29.134728 1180289 cri.go:89] found id: "7a88cae49797bc98630465226d3e739bf587b0fe95e14b7921c20bb3421e1bf8"
	I0731 21:59:29.134758 1180289 cri.go:89] found id: ""
	I0731 21:59:29.134767 1180289 logs.go:276] 1 containers: [7a88cae49797bc98630465226d3e739bf587b0fe95e14b7921c20bb3421e1bf8]
	I0731 21:59:29.134832 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:29.139925 1180289 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:59:29.139996 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:59:29.176655 1180289 cri.go:89] found id: "d4f857d523e5d25aaa8e4f4051b5d616e00c073c7f85a405d950f494d908ca18"
	I0731 21:59:29.176676 1180289 cri.go:89] found id: ""
	I0731 21:59:29.176684 1180289 logs.go:276] 1 containers: [d4f857d523e5d25aaa8e4f4051b5d616e00c073c7f85a405d950f494d908ca18]
	I0731 21:59:29.176728 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:29.181115 1180289 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:59:29.181189 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:59:29.221015 1180289 cri.go:89] found id: "8308c18685c87c162a571a01c3b608c6c251e82191d5f3d3b19d28406d05a76b"
	I0731 21:59:29.221043 1180289 cri.go:89] found id: ""
	I0731 21:59:29.221053 1180289 logs.go:276] 1 containers: [8308c18685c87c162a571a01c3b608c6c251e82191d5f3d3b19d28406d05a76b]
	I0731 21:59:29.221105 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:29.225168 1180289 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:59:29.225245 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:59:29.269905 1180289 cri.go:89] found id: ""
	I0731 21:59:29.269938 1180289 logs.go:276] 0 containers: []
	W0731 21:59:29.269947 1180289 logs.go:278] No container was found matching "kindnet"
	I0731 21:59:29.269959 1180289 logs.go:123] Gathering logs for describe nodes ...
	I0731 21:59:29.269972 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0731 21:59:29.469248 1180289 logs.go:123] Gathering logs for kube-apiserver [fab4d25745e5f6856297a717602df950c877dd91790628668c3e5911a1491259] ...
	I0731 21:59:29.469282 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fab4d25745e5f6856297a717602df950c877dd91790628668c3e5911a1491259"
	I0731 21:59:29.515839 1180289 logs.go:123] Gathering logs for kube-proxy [d4f857d523e5d25aaa8e4f4051b5d616e00c073c7f85a405d950f494d908ca18] ...
	I0731 21:59:29.515875 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f857d523e5d25aaa8e4f4051b5d616e00c073c7f85a405d950f494d908ca18"
	I0731 21:59:29.552331 1180289 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:59:29.552367 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 21:59:30.514870 1180289 logs.go:123] Gathering logs for container status ...
	I0731 21:59:30.514948 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0731 21:59:30.563555 1180289 logs.go:123] Gathering logs for kubelet ...
	I0731 21:59:30.563597 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0731 21:59:30.620690 1180289 logs.go:138] Found kubelet problem: Jul 31 21:57:26 addons-801478 kubelet[1277]: W0731 21:57:26.725308    1277 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-801478" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-801478' and this object
	W0731 21:59:30.620952 1180289 logs.go:138] Found kubelet problem: Jul 31 21:57:26 addons-801478 kubelet[1277]: E0731 21:57:26.725361    1277 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-801478" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-801478' and this object
	I0731 21:59:30.654741 1180289 logs.go:123] Gathering logs for dmesg ...
	I0731 21:59:30.654793 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 21:59:30.669675 1180289 logs.go:123] Gathering logs for etcd [51187821daa3ce8f094d2968c8aff3151ef9a841f337f7dcaaff9ec3b2433c65] ...
	I0731 21:59:30.669720 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51187821daa3ce8f094d2968c8aff3151ef9a841f337f7dcaaff9ec3b2433c65"
	I0731 21:59:30.731060 1180289 logs.go:123] Gathering logs for coredns [9a43205fec093e42ca052676505e33cc6c584d03a59b93f7770a2ef6ead658a8] ...
	I0731 21:59:30.731101 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a43205fec093e42ca052676505e33cc6c584d03a59b93f7770a2ef6ead658a8"
	I0731 21:59:30.768742 1180289 logs.go:123] Gathering logs for kube-scheduler [7a88cae49797bc98630465226d3e739bf587b0fe95e14b7921c20bb3421e1bf8] ...
	I0731 21:59:30.768772 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a88cae49797bc98630465226d3e739bf587b0fe95e14b7921c20bb3421e1bf8"
	I0731 21:59:30.814408 1180289 logs.go:123] Gathering logs for kube-controller-manager [8308c18685c87c162a571a01c3b608c6c251e82191d5f3d3b19d28406d05a76b] ...
	I0731 21:59:30.814458 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8308c18685c87c162a571a01c3b608c6c251e82191d5f3d3b19d28406d05a76b"
	I0731 21:59:30.887395 1180289 out.go:304] Setting ErrFile to fd 2...
	I0731 21:59:30.887434 1180289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0731 21:59:30.887503 1180289 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0731 21:59:30.887515 1180289 out.go:239]   Jul 31 21:57:26 addons-801478 kubelet[1277]: W0731 21:57:26.725308    1277 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-801478" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-801478' and this object
	  Jul 31 21:57:26 addons-801478 kubelet[1277]: W0731 21:57:26.725308    1277 reflector.go:547] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-801478" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-801478' and this object
	W0731 21:59:30.887536 1180289 out.go:239]   Jul 31 21:57:26 addons-801478 kubelet[1277]: E0731 21:57:26.725361    1277 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-801478" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-801478' and this object
	  Jul 31 21:57:26 addons-801478 kubelet[1277]: E0731 21:57:26.725361    1277 reflector.go:150] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-801478" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-801478' and this object
	I0731 21:59:30.887553 1180289 out.go:304] Setting ErrFile to fd 2...
	I0731 21:59:30.887560 1180289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:59:40.888355 1180289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 21:59:40.907020 1180289 api_server.go:72] duration metric: took 2m18.235527327s to wait for apiserver process to appear ...
	I0731 21:59:40.907060 1180289 api_server.go:88] waiting for apiserver healthz status ...
	I0731 21:59:40.907108 1180289 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 21:59:40.907165 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 21:59:40.941915 1180289 cri.go:89] found id: "fab4d25745e5f6856297a717602df950c877dd91790628668c3e5911a1491259"
	I0731 21:59:40.941949 1180289 cri.go:89] found id: ""
	I0731 21:59:40.941960 1180289 logs.go:276] 1 containers: [fab4d25745e5f6856297a717602df950c877dd91790628668c3e5911a1491259]
	I0731 21:59:40.942026 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:40.945875 1180289 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 21:59:40.945944 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 21:59:40.986232 1180289 cri.go:89] found id: "51187821daa3ce8f094d2968c8aff3151ef9a841f337f7dcaaff9ec3b2433c65"
	I0731 21:59:40.986267 1180289 cri.go:89] found id: ""
	I0731 21:59:40.986280 1180289 logs.go:276] 1 containers: [51187821daa3ce8f094d2968c8aff3151ef9a841f337f7dcaaff9ec3b2433c65]
	I0731 21:59:40.986353 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:40.990349 1180289 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 21:59:40.990431 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 21:59:41.024485 1180289 cri.go:89] found id: "9a43205fec093e42ca052676505e33cc6c584d03a59b93f7770a2ef6ead658a8"
	I0731 21:59:41.024508 1180289 cri.go:89] found id: ""
	I0731 21:59:41.024516 1180289 logs.go:276] 1 containers: [9a43205fec093e42ca052676505e33cc6c584d03a59b93f7770a2ef6ead658a8]
	I0731 21:59:41.024569 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:41.028743 1180289 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 21:59:41.028836 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 21:59:41.070956 1180289 cri.go:89] found id: "7a88cae49797bc98630465226d3e739bf587b0fe95e14b7921c20bb3421e1bf8"
	I0731 21:59:41.070980 1180289 cri.go:89] found id: ""
	I0731 21:59:41.070990 1180289 logs.go:276] 1 containers: [7a88cae49797bc98630465226d3e739bf587b0fe95e14b7921c20bb3421e1bf8]
	I0731 21:59:41.071055 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:41.074871 1180289 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 21:59:41.074943 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 21:59:41.110913 1180289 cri.go:89] found id: "d4f857d523e5d25aaa8e4f4051b5d616e00c073c7f85a405d950f494d908ca18"
	I0731 21:59:41.110944 1180289 cri.go:89] found id: ""
	I0731 21:59:41.110953 1180289 logs.go:276] 1 containers: [d4f857d523e5d25aaa8e4f4051b5d616e00c073c7f85a405d950f494d908ca18]
	I0731 21:59:41.111015 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:41.115015 1180289 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 21:59:41.115080 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 21:59:41.154305 1180289 cri.go:89] found id: "8308c18685c87c162a571a01c3b608c6c251e82191d5f3d3b19d28406d05a76b"
	I0731 21:59:41.154335 1180289 cri.go:89] found id: ""
	I0731 21:59:41.154344 1180289 logs.go:276] 1 containers: [8308c18685c87c162a571a01c3b608c6c251e82191d5f3d3b19d28406d05a76b]
	I0731 21:59:41.154397 1180289 ssh_runner.go:195] Run: which crictl
	I0731 21:59:41.159676 1180289 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 21:59:41.159744 1180289 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 21:59:41.196175 1180289 cri.go:89] found id: ""
	I0731 21:59:41.196206 1180289 logs.go:276] 0 containers: []
	W0731 21:59:41.196215 1180289 logs.go:278] No container was found matching "kindnet"
	I0731 21:59:41.196228 1180289 logs.go:123] Gathering logs for etcd [51187821daa3ce8f094d2968c8aff3151ef9a841f337f7dcaaff9ec3b2433c65] ...
	I0731 21:59:41.196243 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51187821daa3ce8f094d2968c8aff3151ef9a841f337f7dcaaff9ec3b2433c65"
	I0731 21:59:41.250928 1180289 logs.go:123] Gathering logs for coredns [9a43205fec093e42ca052676505e33cc6c584d03a59b93f7770a2ef6ead658a8] ...
	I0731 21:59:41.251040 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a43205fec093e42ca052676505e33cc6c584d03a59b93f7770a2ef6ead658a8"
	I0731 21:59:41.286940 1180289 logs.go:123] Gathering logs for kube-scheduler [7a88cae49797bc98630465226d3e739bf587b0fe95e14b7921c20bb3421e1bf8] ...
	I0731 21:59:41.286978 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a88cae49797bc98630465226d3e739bf587b0fe95e14b7921c20bb3421e1bf8"
	I0731 21:59:41.330478 1180289 logs.go:123] Gathering logs for kube-proxy [d4f857d523e5d25aaa8e4f4051b5d616e00c073c7f85a405d950f494d908ca18] ...
	I0731 21:59:41.330517 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4f857d523e5d25aaa8e4f4051b5d616e00c073c7f85a405d950f494d908ca18"
	I0731 21:59:41.366833 1180289 logs.go:123] Gathering logs for CRI-O ...
	I0731 21:59:41.366874 1180289 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-801478 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.07s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 node stop m02 -v=7 --alsologtostderr
E0731 22:45:34.683143 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:46:15.643602 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150891 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.497873098s)

                                                
                                                
-- stdout --
	* Stopping node "ha-150891-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:45:25.887026 1198424 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:45:25.887143 1198424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:45:25.887147 1198424 out.go:304] Setting ErrFile to fd 2...
	I0731 22:45:25.887152 1198424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:45:25.887428 1198424 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:45:25.887749 1198424 mustload.go:65] Loading cluster: ha-150891
	I0731 22:45:25.888181 1198424 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:45:25.888208 1198424 stop.go:39] StopHost: ha-150891-m02
	I0731 22:45:25.888694 1198424 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:45:25.888740 1198424 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:45:25.904923 1198424 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39295
	I0731 22:45:25.905505 1198424 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:45:25.906310 1198424 main.go:141] libmachine: Using API Version  1
	I0731 22:45:25.906351 1198424 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:45:25.906794 1198424 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:45:25.909261 1198424 out.go:177] * Stopping node "ha-150891-m02"  ...
	I0731 22:45:25.910504 1198424 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 22:45:25.910565 1198424 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:45:25.910896 1198424 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 22:45:25.910930 1198424 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:45:25.913987 1198424 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:45:25.914352 1198424 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:45:25.914387 1198424 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:45:25.914541 1198424 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:45:25.914751 1198424 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:45:25.914962 1198424 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:45:25.915132 1198424 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	I0731 22:45:26.004207 1198424 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 22:45:26.058206 1198424 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 22:45:26.115352 1198424 main.go:141] libmachine: Stopping "ha-150891-m02"...
	I0731 22:45:26.115393 1198424 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:45:26.117146 1198424 main.go:141] libmachine: (ha-150891-m02) Calling .Stop
	I0731 22:45:26.121064 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 0/120
	I0731 22:45:27.122634 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 1/120
	I0731 22:45:28.123954 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 2/120
	I0731 22:45:29.125586 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 3/120
	I0731 22:45:30.126934 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 4/120
	I0731 22:45:31.128917 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 5/120
	I0731 22:45:32.131255 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 6/120
	I0731 22:45:33.132773 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 7/120
	I0731 22:45:34.134277 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 8/120
	I0731 22:45:35.135728 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 9/120
	I0731 22:45:36.138091 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 10/120
	I0731 22:45:37.139967 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 11/120
	I0731 22:45:38.141455 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 12/120
	I0731 22:45:39.143297 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 13/120
	I0731 22:45:40.145273 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 14/120
	I0731 22:45:41.147487 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 15/120
	I0731 22:45:42.148886 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 16/120
	I0731 22:45:43.150669 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 17/120
	I0731 22:45:44.152262 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 18/120
	I0731 22:45:45.153758 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 19/120
	I0731 22:45:46.155790 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 20/120
	I0731 22:45:47.157231 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 21/120
	I0731 22:45:48.158725 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 22/120
	I0731 22:45:49.160012 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 23/120
	I0731 22:45:50.161447 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 24/120
	I0731 22:45:51.163681 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 25/120
	I0731 22:45:52.165294 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 26/120
	I0731 22:45:53.167089 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 27/120
	I0731 22:45:54.168670 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 28/120
	I0731 22:45:55.169963 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 29/120
	I0731 22:45:56.172054 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 30/120
	I0731 22:45:57.174523 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 31/120
	I0731 22:45:58.175934 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 32/120
	I0731 22:45:59.177522 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 33/120
	I0731 22:46:00.179443 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 34/120
	I0731 22:46:01.181490 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 35/120
	I0731 22:46:02.183081 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 36/120
	I0731 22:46:03.184426 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 37/120
	I0731 22:46:04.186580 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 38/120
	I0731 22:46:05.188367 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 39/120
	I0731 22:46:06.190368 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 40/120
	I0731 22:46:07.191986 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 41/120
	I0731 22:46:08.193430 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 42/120
	I0731 22:46:09.195023 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 43/120
	I0731 22:46:10.196295 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 44/120
	I0731 22:46:11.198653 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 45/120
	I0731 22:46:12.200510 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 46/120
	I0731 22:46:13.201875 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 47/120
	I0731 22:46:14.203393 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 48/120
	I0731 22:46:15.204887 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 49/120
	I0731 22:46:16.206708 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 50/120
	I0731 22:46:17.208148 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 51/120
	I0731 22:46:18.209925 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 52/120
	I0731 22:46:19.211389 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 53/120
	I0731 22:46:20.212821 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 54/120
	I0731 22:46:21.215025 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 55/120
	I0731 22:46:22.216268 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 56/120
	I0731 22:46:23.217707 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 57/120
	I0731 22:46:24.219605 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 58/120
	I0731 22:46:25.221423 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 59/120
	I0731 22:46:26.223971 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 60/120
	I0731 22:46:27.225676 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 61/120
	I0731 22:46:28.227107 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 62/120
	I0731 22:46:29.228802 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 63/120
	I0731 22:46:30.230127 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 64/120
	I0731 22:46:31.231763 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 65/120
	I0731 22:46:32.233244 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 66/120
	I0731 22:46:33.234751 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 67/120
	I0731 22:46:34.236235 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 68/120
	I0731 22:46:35.237661 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 69/120
	I0731 22:46:36.239940 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 70/120
	I0731 22:46:37.242051 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 71/120
	I0731 22:46:38.243799 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 72/120
	I0731 22:46:39.245756 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 73/120
	I0731 22:46:40.247868 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 74/120
	I0731 22:46:41.249914 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 75/120
	I0731 22:46:42.251567 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 76/120
	I0731 22:46:43.253222 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 77/120
	I0731 22:46:44.254529 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 78/120
	I0731 22:46:45.256078 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 79/120
	I0731 22:46:46.258014 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 80/120
	I0731 22:46:47.259663 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 81/120
	I0731 22:46:48.262142 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 82/120
	I0731 22:46:49.264547 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 83/120
	I0731 22:46:50.266900 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 84/120
	I0731 22:46:51.269105 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 85/120
	I0731 22:46:52.270846 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 86/120
	I0731 22:46:53.272442 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 87/120
	I0731 22:46:54.274007 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 88/120
	I0731 22:46:55.275558 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 89/120
	I0731 22:46:56.277903 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 90/120
	I0731 22:46:57.279267 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 91/120
	I0731 22:46:58.281626 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 92/120
	I0731 22:46:59.283092 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 93/120
	I0731 22:47:00.284715 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 94/120
	I0731 22:47:01.286679 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 95/120
	I0731 22:47:02.288234 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 96/120
	I0731 22:47:03.289864 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 97/120
	I0731 22:47:04.291452 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 98/120
	I0731 22:47:05.293027 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 99/120
	I0731 22:47:06.295474 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 100/120
	I0731 22:47:07.297222 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 101/120
	I0731 22:47:08.298701 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 102/120
	I0731 22:47:09.300760 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 103/120
	I0731 22:47:10.302239 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 104/120
	I0731 22:47:11.304528 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 105/120
	I0731 22:47:12.306922 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 106/120
	I0731 22:47:13.308535 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 107/120
	I0731 22:47:14.311000 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 108/120
	I0731 22:47:15.312625 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 109/120
	I0731 22:47:16.314748 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 110/120
	I0731 22:47:17.316396 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 111/120
	I0731 22:47:18.318686 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 112/120
	I0731 22:47:19.320936 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 113/120
	I0731 22:47:20.322254 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 114/120
	I0731 22:47:21.324523 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 115/120
	I0731 22:47:22.325969 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 116/120
	I0731 22:47:23.328387 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 117/120
	I0731 22:47:24.330685 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 118/120
	I0731 22:47:25.332350 1198424 main.go:141] libmachine: (ha-150891-m02) Waiting for machine to stop 119/120
	I0731 22:47:26.333940 1198424 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 22:47:26.334092 1198424 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-150891 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
E0731 22:47:37.565899 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr: exit status 3 (19.116928196s)

                                                
                                                
-- stdout --
	ha-150891
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-150891-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:47:26.383767 1198850 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:47:26.384084 1198850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:47:26.384113 1198850 out.go:304] Setting ErrFile to fd 2...
	I0731 22:47:26.384120 1198850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:47:26.384356 1198850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:47:26.384628 1198850 out.go:298] Setting JSON to false
	I0731 22:47:26.384656 1198850 mustload.go:65] Loading cluster: ha-150891
	I0731 22:47:26.384758 1198850 notify.go:220] Checking for updates...
	I0731 22:47:26.385168 1198850 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:47:26.385187 1198850 status.go:255] checking status of ha-150891 ...
	I0731 22:47:26.385630 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:26.385705 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:26.404484 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37409
	I0731 22:47:26.405103 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:26.405754 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:26.405777 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:26.406157 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:26.406346 1198850 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:47:26.407945 1198850 status.go:330] ha-150891 host status = "Running" (err=<nil>)
	I0731 22:47:26.407966 1198850 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:47:26.408295 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:26.408343 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:26.423727 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0731 22:47:26.424231 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:26.424928 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:26.424965 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:26.425319 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:26.425525 1198850 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:47:26.428696 1198850 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:26.429233 1198850 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:47:26.429284 1198850 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:26.429393 1198850 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:47:26.429928 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:26.429984 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:26.447071 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42779
	I0731 22:47:26.447641 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:26.448174 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:26.448201 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:26.448540 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:26.448745 1198850 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:47:26.448967 1198850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:47:26.449009 1198850 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:47:26.452237 1198850 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:26.452716 1198850 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:47:26.452757 1198850 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:26.452885 1198850 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:47:26.453096 1198850 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:47:26.453269 1198850 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:47:26.453432 1198850 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:47:26.539256 1198850 ssh_runner.go:195] Run: systemctl --version
	I0731 22:47:26.545596 1198850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:47:26.560503 1198850 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:47:26.560539 1198850 api_server.go:166] Checking apiserver status ...
	I0731 22:47:26.560578 1198850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:47:26.574163 1198850 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0731 22:47:26.584129 1198850 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:47:26.584187 1198850 ssh_runner.go:195] Run: ls
	I0731 22:47:26.588338 1198850 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:47:26.592513 1198850 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:47:26.592540 1198850 status.go:422] ha-150891 apiserver status = Running (err=<nil>)
	I0731 22:47:26.592550 1198850 status.go:257] ha-150891 status: &{Name:ha-150891 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
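
The status check traced above SSHes into the node, samples /var usage with df, looks for a kube-apiserver process and its freezer cgroup, and finally probes the apiserver's /healthz endpoint. Below is a minimal Go sketch of that final probe only; the URL comes from the log, while the TLS handling and function names are assumptions for illustration, not minikube's actual api_server.go code.

// Minimal sketch of an apiserver /healthz probe (illustrative, not minikube code).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func probeHealthz(url string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The control plane serves a cluster-internal certificate; this sketch
		// skips verification, whereas real code would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return fmt.Sprintf("%d: %s", resp.StatusCode, body), nil
}

func main() {
	out, err := probeHealthz("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	// On a healthy control-plane node this prints "200: ok", matching the log.
	fmt.Println("apiserver status = Running,", out)
}
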
	I0731 22:47:26.592568 1198850 status.go:255] checking status of ha-150891-m02 ...
	I0731 22:47:26.592916 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:26.592949 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:26.608770 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34275
	I0731 22:47:26.609290 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:26.609763 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:26.609777 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:26.610120 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:26.610319 1198850 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:47:26.612016 1198850 status.go:330] ha-150891-m02 host status = "Running" (err=<nil>)
	I0731 22:47:26.612035 1198850 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:47:26.612358 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:26.612390 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:26.628124 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38641
	I0731 22:47:26.628730 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:26.629306 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:26.629330 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:26.629662 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:26.629829 1198850 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:47:26.632756 1198850 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:26.633206 1198850 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:47:26.633229 1198850 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:26.633419 1198850 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:47:26.633768 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:26.633810 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:26.649167 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37561
	I0731 22:47:26.649726 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:26.650233 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:26.650257 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:26.650573 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:26.650819 1198850 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:47:26.651019 1198850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:47:26.651039 1198850 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:47:26.654109 1198850 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:26.654594 1198850 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:47:26.654629 1198850 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:26.654765 1198850 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:47:26.654955 1198850 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:47:26.655133 1198850 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:47:26.655263 1198850 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	W0731 22:47:45.088317 1198850 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.224:22: connect: no route to host
	W0731 22:47:45.088486 1198850 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0731 22:47:45.088509 1198850 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:47:45.088519 1198850 status.go:257] ha-150891-m02 status: &{Name:ha-150891-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 22:47:45.088548 1198850 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
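
The m02 check above fails at the SSH dial: the VM was just stopped by "node stop m02", so TCP to 192.168.39.224:22 returns "no route to host" and the node is reported as Host:Error / Kubelet:Nonexistent. A rough sketch (simplified, not minikube's sshutil code) of how such a dial failure can be mapped to an Error host state:

// Sketch: classify a node as Running or Error based on SSH reachability.
package main

import (
	"fmt"
	"net"
	"time"
)

func hostState(addr string) string {
	conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
	if err != nil {
		// A stopped VM typically surfaces as "connect: no route to host"
		// or a timeout; either way the node is unreachable over SSH.
		return fmt.Sprintf("Error (%v)", err)
	}
	conn.Close()
	return "Running"
}

func main() {
	fmt.Println("ha-150891-m02 host:", hostState("192.168.39.224:22"))
}
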
	I0731 22:47:45.088559 1198850 status.go:255] checking status of ha-150891-m03 ...
	I0731 22:47:45.088982 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:45.089048 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:45.104917 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0731 22:47:45.105322 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:45.105849 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:45.105873 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:45.106177 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:45.106378 1198850 main.go:141] libmachine: (ha-150891-m03) Calling .GetState
	I0731 22:47:45.108026 1198850 status.go:330] ha-150891-m03 host status = "Running" (err=<nil>)
	I0731 22:47:45.108044 1198850 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:47:45.108361 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:45.108400 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:45.124971 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45487
	I0731 22:47:45.125437 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:45.125940 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:45.125958 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:45.126223 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:45.126444 1198850 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:47:45.129484 1198850 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:47:45.129901 1198850 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:47:45.129936 1198850 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:47:45.130068 1198850 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:47:45.130374 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:45.130410 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:45.145862 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0731 22:47:45.146337 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:45.146845 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:45.146868 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:45.147210 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:45.147396 1198850 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:47:45.147617 1198850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:47:45.147638 1198850 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:47:45.150806 1198850 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:47:45.151267 1198850 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:47:45.151301 1198850 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:47:45.151485 1198850 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:47:45.151669 1198850 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:47:45.151857 1198850 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:47:45.152026 1198850 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:47:45.233756 1198850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:47:45.250761 1198850 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:47:45.250794 1198850 api_server.go:166] Checking apiserver status ...
	I0731 22:47:45.250826 1198850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:47:45.267652 1198850 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup
	W0731 22:47:45.278129 1198850 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:47:45.278188 1198850 ssh_runner.go:195] Run: ls
	I0731 22:47:45.282455 1198850 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:47:45.287173 1198850 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:47:45.287203 1198850 status.go:422] ha-150891-m03 apiserver status = Running (err=<nil>)
	I0731 22:47:45.287211 1198850 status.go:257] ha-150891-m03 status: &{Name:ha-150891-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:47:45.287230 1198850 status.go:255] checking status of ha-150891-m04 ...
	I0731 22:47:45.287532 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:45.287556 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:45.305458 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46511
	I0731 22:47:45.306021 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:45.306586 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:45.306627 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:45.307033 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:45.307237 1198850 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:47:45.308980 1198850 status.go:330] ha-150891-m04 host status = "Running" (err=<nil>)
	I0731 22:47:45.309000 1198850 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:47:45.309330 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:45.309362 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:45.325356 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35379
	I0731 22:47:45.325864 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:45.326401 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:45.326426 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:45.326752 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:45.326991 1198850 main.go:141] libmachine: (ha-150891-m04) Calling .GetIP
	I0731 22:47:45.330236 1198850 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:47:45.330626 1198850 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:47:45.330646 1198850 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:47:45.330887 1198850 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:47:45.331281 1198850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:45.331330 1198850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:45.347130 1198850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45141
	I0731 22:47:45.347647 1198850 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:45.348176 1198850 main.go:141] libmachine: Using API Version  1
	I0731 22:47:45.348199 1198850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:45.348486 1198850 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:45.348655 1198850 main.go:141] libmachine: (ha-150891-m04) Calling .DriverName
	I0731 22:47:45.348842 1198850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:47:45.348865 1198850 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHHostname
	I0731 22:47:45.351684 1198850 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:47:45.352154 1198850 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:47:45.352176 1198850 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:47:45.352314 1198850 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHPort
	I0731 22:47:45.352496 1198850 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHKeyPath
	I0731 22:47:45.352682 1198850 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHUsername
	I0731 22:47:45.352832 1198850 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m04/id_rsa Username:docker}
	I0731 22:47:45.435546 1198850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:47:45.450556 1198850 status.go:257] ha-150891-m04 status: &{Name:ha-150891-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-150891 -n ha-150891
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-150891 logs -n 25: (1.312781025s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3873107821/001/cp-test_ha-150891-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891:/home/docker/cp-test_ha-150891-m03_ha-150891.txt                       |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891 sudo cat                                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m03_ha-150891.txt                                 |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m02:/home/docker/cp-test_ha-150891-m03_ha-150891-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m02 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m03_ha-150891-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04:/home/docker/cp-test_ha-150891-m03_ha-150891-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m04 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m03_ha-150891-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp testdata/cp-test.txt                                                | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3873107821/001/cp-test_ha-150891-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891:/home/docker/cp-test_ha-150891-m04_ha-150891.txt                       |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891 sudo cat                                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891.txt                                 |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m02:/home/docker/cp-test_ha-150891-m04_ha-150891-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m02 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03:/home/docker/cp-test_ha-150891-m04_ha-150891-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m03 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-150891 node stop m02 -v=7                                                     | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:40:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:40:40.501333 1194386 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:40:40.501605 1194386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:40:40.501613 1194386 out.go:304] Setting ErrFile to fd 2...
	I0731 22:40:40.501617 1194386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:40:40.501819 1194386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:40:40.502468 1194386 out.go:298] Setting JSON to false
	I0731 22:40:40.503429 1194386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":22991,"bootTime":1722442649,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 22:40:40.503497 1194386 start.go:139] virtualization: kvm guest
	I0731 22:40:40.505751 1194386 out.go:177] * [ha-150891] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 22:40:40.507210 1194386 notify.go:220] Checking for updates...
	I0731 22:40:40.507218 1194386 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:40:40.508910 1194386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:40:40.510277 1194386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:40:40.511652 1194386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:40:40.512941 1194386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 22:40:40.514171 1194386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:40:40.515483 1194386 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:40:40.553750 1194386 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 22:40:40.554943 1194386 start.go:297] selected driver: kvm2
	I0731 22:40:40.554960 1194386 start.go:901] validating driver "kvm2" against <nil>
	I0731 22:40:40.554999 1194386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:40:40.555780 1194386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:40:40.555881 1194386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 22:40:40.571732 1194386 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 22:40:40.571800 1194386 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 22:40:40.572052 1194386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:40:40.572145 1194386 cni.go:84] Creating CNI manager for ""
	I0731 22:40:40.572161 1194386 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 22:40:40.572169 1194386 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 22:40:40.572225 1194386 start.go:340] cluster config:
	{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
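
To make the flattened config dump above easier to read, here is a heavily trimmed, illustrative Go mirror of a few of its fields (field names and values are copied from the log; the real minikube config type carries many more fields):

// Trimmed, illustrative mirror of the cluster config printed above.
package main

import "fmt"

type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	ServiceCIDR       string
}

type Node struct {
	Port              int
	KubernetesVersion string
	ContainerRuntime  string
	ControlPlane      bool
	Worker            bool
}

type ClusterConfig struct {
	Name             string
	Memory           int // MiB
	CPUs             int
	DiskSize         int // MB
	Driver           string
	KubernetesConfig KubernetesConfig
	Nodes            []Node
}

func main() {
	cfg := ClusterConfig{
		Name: "ha-150891", Memory: 2200, CPUs: 2, DiskSize: 20000, Driver: "kvm2",
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.30.3", ClusterName: "ha-150891",
			ContainerRuntime: "crio", NetworkPlugin: "cni", ServiceCIDR: "10.96.0.0/12",
		},
		Nodes: []Node{{Port: 8443, KubernetesVersion: "v1.30.3", ContainerRuntime: "crio", ControlPlane: true, Worker: true}},
	}
	fmt.Printf("%+v\n", cfg)
}
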
	I0731 22:40:40.572324 1194386 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:40:40.574153 1194386 out.go:177] * Starting "ha-150891" primary control-plane node in "ha-150891" cluster
	I0731 22:40:40.575282 1194386 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:40:40.575322 1194386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 22:40:40.575333 1194386 cache.go:56] Caching tarball of preloaded images
	I0731 22:40:40.575419 1194386 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 22:40:40.575430 1194386 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 22:40:40.575725 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:40:40.575747 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json: {Name:mk9638a254245e6b064f22970f1f8c3c5e0311aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:40:40.575883 1194386 start.go:360] acquireMachinesLock for ha-150891: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:40:40.575912 1194386 start.go:364] duration metric: took 15.828µs to acquireMachinesLock for "ha-150891"
	I0731 22:40:40.575929 1194386 start.go:93] Provisioning new machine with config: &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:40:40.575992 1194386 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 22:40:40.578292 1194386 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:40:40.578436 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:40:40.578493 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:40:40.594322 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36293
	I0731 22:40:40.594834 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:40:40.595361 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:40:40.595386 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:40:40.595699 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:40:40.595878 1194386 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:40:40.596066 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:40:40.596248 1194386 start.go:159] libmachine.API.Create for "ha-150891" (driver="kvm2")
	I0731 22:40:40.596282 1194386 client.go:168] LocalClient.Create starting
	I0731 22:40:40.596314 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem
	I0731 22:40:40.596345 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:40:40.596358 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:40:40.596402 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem
	I0731 22:40:40.596420 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:40:40.596431 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:40:40.596446 1194386 main.go:141] libmachine: Running pre-create checks...
	I0731 22:40:40.596455 1194386 main.go:141] libmachine: (ha-150891) Calling .PreCreateCheck
	I0731 22:40:40.596780 1194386 main.go:141] libmachine: (ha-150891) Calling .GetConfigRaw
	I0731 22:40:40.597147 1194386 main.go:141] libmachine: Creating machine...
	I0731 22:40:40.597160 1194386 main.go:141] libmachine: (ha-150891) Calling .Create
	I0731 22:40:40.597284 1194386 main.go:141] libmachine: (ha-150891) Creating KVM machine...
	I0731 22:40:40.598730 1194386 main.go:141] libmachine: (ha-150891) DBG | found existing default KVM network
	I0731 22:40:40.599631 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:40.599448 1194409 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f350}
	I0731 22:40:40.599656 1194386 main.go:141] libmachine: (ha-150891) DBG | created network xml: 
	I0731 22:40:40.599671 1194386 main.go:141] libmachine: (ha-150891) DBG | <network>
	I0731 22:40:40.599683 1194386 main.go:141] libmachine: (ha-150891) DBG |   <name>mk-ha-150891</name>
	I0731 22:40:40.599691 1194386 main.go:141] libmachine: (ha-150891) DBG |   <dns enable='no'/>
	I0731 22:40:40.599702 1194386 main.go:141] libmachine: (ha-150891) DBG |   
	I0731 22:40:40.599718 1194386 main.go:141] libmachine: (ha-150891) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 22:40:40.599727 1194386 main.go:141] libmachine: (ha-150891) DBG |     <dhcp>
	I0731 22:40:40.599740 1194386 main.go:141] libmachine: (ha-150891) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 22:40:40.599754 1194386 main.go:141] libmachine: (ha-150891) DBG |     </dhcp>
	I0731 22:40:40.599767 1194386 main.go:141] libmachine: (ha-150891) DBG |   </ip>
	I0731 22:40:40.599777 1194386 main.go:141] libmachine: (ha-150891) DBG |   
	I0731 22:40:40.599784 1194386 main.go:141] libmachine: (ha-150891) DBG | </network>
	I0731 22:40:40.599793 1194386 main.go:141] libmachine: (ha-150891) DBG | 
	I0731 22:40:40.604945 1194386 main.go:141] libmachine: (ha-150891) DBG | trying to create private KVM network mk-ha-150891 192.168.39.0/24...
	I0731 22:40:40.675326 1194386 main.go:141] libmachine: (ha-150891) DBG | private KVM network mk-ha-150891 192.168.39.0/24 created
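
For reference, the private network created above can be reproduced by hand with virsh. The sketch below (the temp file path is arbitrary) shells out to virsh net-define and net-start with the same XML the log printed; minikube itself goes through its libvirt driver rather than shelling out, so this is illustrative only.

// Sketch: create the mk-ha-150891 libvirt network manually via virsh.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	xml := `<network>
  <name>mk-ha-150891</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

	if err := os.WriteFile("/tmp/mk-ha-150891.xml", []byte(xml), 0o644); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"net-define", "/tmp/mk-ha-150891.xml"}, // register the network with libvirt
		{"net-start", "mk-ha-150891"},           // bring up the virbr bridge and its dnsmasq DHCP
	} {
		cmd := exec.Command("virsh", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}
}
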
	I0731 22:40:40.675369 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:40.675244 1194409 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:40:40.675384 1194386 main.go:141] libmachine: (ha-150891) Setting up store path in /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891 ...
	I0731 22:40:40.675405 1194386 main.go:141] libmachine: (ha-150891) Building disk image from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 22:40:40.675422 1194386 main.go:141] libmachine: (ha-150891) Downloading /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:40:40.957270 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:40.957094 1194409 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa...
	I0731 22:40:41.156324 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:41.156160 1194409 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/ha-150891.rawdisk...
	I0731 22:40:41.156354 1194386 main.go:141] libmachine: (ha-150891) DBG | Writing magic tar header
	I0731 22:40:41.156365 1194386 main.go:141] libmachine: (ha-150891) DBG | Writing SSH key tar header
	I0731 22:40:41.156373 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:41.156286 1194409 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891 ...
	I0731 22:40:41.156388 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891
	I0731 22:40:41.156487 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines
	I0731 22:40:41.156512 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:40:41.156521 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891 (perms=drwx------)
	I0731 22:40:41.156533 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines (perms=drwxr-xr-x)
	I0731 22:40:41.156539 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube (perms=drwxr-xr-x)
	I0731 22:40:41.156549 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186 (perms=drwxrwxr-x)
	I0731 22:40:41.156558 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186
	I0731 22:40:41.156567 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 22:40:41.156586 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 22:40:41.156598 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 22:40:41.156603 1194386 main.go:141] libmachine: (ha-150891) Creating domain...
	I0731 22:40:41.156636 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins
	I0731 22:40:41.156661 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home
	I0731 22:40:41.156675 1194386 main.go:141] libmachine: (ha-150891) DBG | Skipping /home - not owner
	I0731 22:40:41.157818 1194386 main.go:141] libmachine: (ha-150891) define libvirt domain using xml: 
	I0731 22:40:41.157837 1194386 main.go:141] libmachine: (ha-150891) <domain type='kvm'>
	I0731 22:40:41.157843 1194386 main.go:141] libmachine: (ha-150891)   <name>ha-150891</name>
	I0731 22:40:41.157848 1194386 main.go:141] libmachine: (ha-150891)   <memory unit='MiB'>2200</memory>
	I0731 22:40:41.157856 1194386 main.go:141] libmachine: (ha-150891)   <vcpu>2</vcpu>
	I0731 22:40:41.157864 1194386 main.go:141] libmachine: (ha-150891)   <features>
	I0731 22:40:41.157894 1194386 main.go:141] libmachine: (ha-150891)     <acpi/>
	I0731 22:40:41.157922 1194386 main.go:141] libmachine: (ha-150891)     <apic/>
	I0731 22:40:41.157946 1194386 main.go:141] libmachine: (ha-150891)     <pae/>
	I0731 22:40:41.157977 1194386 main.go:141] libmachine: (ha-150891)     
	I0731 22:40:41.157990 1194386 main.go:141] libmachine: (ha-150891)   </features>
	I0731 22:40:41.158000 1194386 main.go:141] libmachine: (ha-150891)   <cpu mode='host-passthrough'>
	I0731 22:40:41.158011 1194386 main.go:141] libmachine: (ha-150891)   
	I0731 22:40:41.158020 1194386 main.go:141] libmachine: (ha-150891)   </cpu>
	I0731 22:40:41.158030 1194386 main.go:141] libmachine: (ha-150891)   <os>
	I0731 22:40:41.158039 1194386 main.go:141] libmachine: (ha-150891)     <type>hvm</type>
	I0731 22:40:41.158050 1194386 main.go:141] libmachine: (ha-150891)     <boot dev='cdrom'/>
	I0731 22:40:41.158063 1194386 main.go:141] libmachine: (ha-150891)     <boot dev='hd'/>
	I0731 22:40:41.158073 1194386 main.go:141] libmachine: (ha-150891)     <bootmenu enable='no'/>
	I0731 22:40:41.158082 1194386 main.go:141] libmachine: (ha-150891)   </os>
	I0731 22:40:41.158091 1194386 main.go:141] libmachine: (ha-150891)   <devices>
	I0731 22:40:41.158104 1194386 main.go:141] libmachine: (ha-150891)     <disk type='file' device='cdrom'>
	I0731 22:40:41.158113 1194386 main.go:141] libmachine: (ha-150891)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/boot2docker.iso'/>
	I0731 22:40:41.158121 1194386 main.go:141] libmachine: (ha-150891)       <target dev='hdc' bus='scsi'/>
	I0731 22:40:41.158126 1194386 main.go:141] libmachine: (ha-150891)       <readonly/>
	I0731 22:40:41.158134 1194386 main.go:141] libmachine: (ha-150891)     </disk>
	I0731 22:40:41.158144 1194386 main.go:141] libmachine: (ha-150891)     <disk type='file' device='disk'>
	I0731 22:40:41.158170 1194386 main.go:141] libmachine: (ha-150891)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 22:40:41.158194 1194386 main.go:141] libmachine: (ha-150891)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/ha-150891.rawdisk'/>
	I0731 22:40:41.158208 1194386 main.go:141] libmachine: (ha-150891)       <target dev='hda' bus='virtio'/>
	I0731 22:40:41.158218 1194386 main.go:141] libmachine: (ha-150891)     </disk>
	I0731 22:40:41.158230 1194386 main.go:141] libmachine: (ha-150891)     <interface type='network'>
	I0731 22:40:41.158242 1194386 main.go:141] libmachine: (ha-150891)       <source network='mk-ha-150891'/>
	I0731 22:40:41.158260 1194386 main.go:141] libmachine: (ha-150891)       <model type='virtio'/>
	I0731 22:40:41.158277 1194386 main.go:141] libmachine: (ha-150891)     </interface>
	I0731 22:40:41.158295 1194386 main.go:141] libmachine: (ha-150891)     <interface type='network'>
	I0731 22:40:41.158312 1194386 main.go:141] libmachine: (ha-150891)       <source network='default'/>
	I0731 22:40:41.158323 1194386 main.go:141] libmachine: (ha-150891)       <model type='virtio'/>
	I0731 22:40:41.158333 1194386 main.go:141] libmachine: (ha-150891)     </interface>
	I0731 22:40:41.158344 1194386 main.go:141] libmachine: (ha-150891)     <serial type='pty'>
	I0731 22:40:41.158352 1194386 main.go:141] libmachine: (ha-150891)       <target port='0'/>
	I0731 22:40:41.158357 1194386 main.go:141] libmachine: (ha-150891)     </serial>
	I0731 22:40:41.158364 1194386 main.go:141] libmachine: (ha-150891)     <console type='pty'>
	I0731 22:40:41.158370 1194386 main.go:141] libmachine: (ha-150891)       <target type='serial' port='0'/>
	I0731 22:40:41.158377 1194386 main.go:141] libmachine: (ha-150891)     </console>
	I0731 22:40:41.158382 1194386 main.go:141] libmachine: (ha-150891)     <rng model='virtio'>
	I0731 22:40:41.158393 1194386 main.go:141] libmachine: (ha-150891)       <backend model='random'>/dev/random</backend>
	I0731 22:40:41.158409 1194386 main.go:141] libmachine: (ha-150891)     </rng>
	I0731 22:40:41.158425 1194386 main.go:141] libmachine: (ha-150891)     
	I0731 22:40:41.158437 1194386 main.go:141] libmachine: (ha-150891)     
	I0731 22:40:41.158446 1194386 main.go:141] libmachine: (ha-150891)   </devices>
	I0731 22:40:41.158457 1194386 main.go:141] libmachine: (ha-150891) </domain>
	I0731 22:40:41.158465 1194386 main.go:141] libmachine: (ha-150891) 
	I0731 22:40:41.162729 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:b7:c5:0e in network default
	I0731 22:40:41.163316 1194386 main.go:141] libmachine: (ha-150891) Ensuring networks are active...
	I0731 22:40:41.163335 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:41.163965 1194386 main.go:141] libmachine: (ha-150891) Ensuring network default is active
	I0731 22:40:41.164277 1194386 main.go:141] libmachine: (ha-150891) Ensuring network mk-ha-150891 is active
	I0731 22:40:41.164795 1194386 main.go:141] libmachine: (ha-150891) Getting domain xml...
	I0731 22:40:41.165491 1194386 main.go:141] libmachine: (ha-150891) Creating domain...
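
The "define libvirt domain using xml" and "Creating domain" steps above likewise have a manual virsh equivalent. A compact sketch (the XML path is a placeholder for a file holding the domain definition printed in the log):

// Sketch: define and start the ha-150891 domain manually via virsh.
package main

import (
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("virsh", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// /tmp/ha-150891.xml would hold the <domain> XML shown above.
	if err := run("define", "/tmp/ha-150891.xml"); err != nil {
		os.Exit(1)
	}
	if err := run("start", "ha-150891"); err != nil {
		os.Exit(1)
	}
}
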
	I0731 22:40:42.383940 1194386 main.go:141] libmachine: (ha-150891) Waiting to get IP...
	I0731 22:40:42.384727 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:42.385076 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:42.385112 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:42.385050 1194409 retry.go:31] will retry after 303.270484ms: waiting for machine to come up
	I0731 22:40:42.690183 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:42.690649 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:42.690673 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:42.690608 1194409 retry.go:31] will retry after 318.522166ms: waiting for machine to come up
	I0731 22:40:43.011209 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:43.011564 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:43.011603 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:43.011546 1194409 retry.go:31] will retry after 482.718271ms: waiting for machine to come up
	I0731 22:40:43.496168 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:43.496531 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:43.496561 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:43.496474 1194409 retry.go:31] will retry after 430.6903ms: waiting for machine to come up
	I0731 22:40:43.929145 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:43.929597 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:43.929618 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:43.929547 1194409 retry.go:31] will retry after 659.092465ms: waiting for machine to come up
	I0731 22:40:44.590408 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:44.590821 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:44.590849 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:44.590777 1194409 retry.go:31] will retry after 721.169005ms: waiting for machine to come up
	I0731 22:40:45.313753 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:45.314240 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:45.314271 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:45.314183 1194409 retry.go:31] will retry after 721.182405ms: waiting for machine to come up
	I0731 22:40:46.036604 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:46.037080 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:46.037108 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:46.037030 1194409 retry.go:31] will retry after 950.144159ms: waiting for machine to come up
	I0731 22:40:46.989140 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:46.989471 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:46.989495 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:46.989422 1194409 retry.go:31] will retry after 1.605315848s: waiting for machine to come up
	I0731 22:40:48.597253 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:48.597680 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:48.597714 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:48.597629 1194409 retry.go:31] will retry after 1.497155047s: waiting for machine to come up
	I0731 22:40:50.097369 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:50.097837 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:50.097894 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:50.097827 1194409 retry.go:31] will retry after 1.906642059s: waiting for machine to come up
	I0731 22:40:52.006830 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:52.007200 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:52.007231 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:52.007157 1194409 retry.go:31] will retry after 3.526118614s: waiting for machine to come up
	I0731 22:40:55.537756 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:55.538179 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:55.538203 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:55.538137 1194409 retry.go:31] will retry after 3.929909401s: waiting for machine to come up
	I0731 22:40:59.469246 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:59.469664 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:59.469685 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:59.469620 1194409 retry.go:31] will retry after 4.739931386s: waiting for machine to come up
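The retry.go lines above show libmachine polling the libvirt DHCP leases for the new domain, waiting a little longer after each failed lookup (roughly 300ms growing toward several seconds) until the guest obtains an address. A minimal Go sketch of that retry-with-growing-backoff pattern (an illustrative helper, not minikube's actual retry package):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out,
// growing the wait between attempts. Purely illustrative.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait = wait * 3 / 2 // grow ~1.5x per attempt, roughly like the delays in the log
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(15, 300*time.Millisecond, func() error {
		tries++
		if tries < 5 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil // pretend the DHCP lease finally showed up
	})
	fmt.Println("done:", err)
}
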
	I0731 22:41:04.213465 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.213947 1194386 main.go:141] libmachine: (ha-150891) Found IP for machine: 192.168.39.105
	I0731 22:41:04.213969 1194386 main.go:141] libmachine: (ha-150891) Reserving static IP address...
	I0731 22:41:04.213988 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has current primary IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.214287 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find host DHCP lease matching {name: "ha-150891", mac: "52:54:00:5d:5d:f5", ip: "192.168.39.105"} in network mk-ha-150891
	I0731 22:41:04.296679 1194386 main.go:141] libmachine: (ha-150891) DBG | Getting to WaitForSSH function...
	I0731 22:41:04.296713 1194386 main.go:141] libmachine: (ha-150891) Reserved static IP address: 192.168.39.105
	I0731 22:41:04.296727 1194386 main.go:141] libmachine: (ha-150891) Waiting for SSH to be available...
	I0731 22:41:04.299421 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.299881 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.299928 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.300075 1194386 main.go:141] libmachine: (ha-150891) DBG | Using SSH client type: external
	I0731 22:41:04.300113 1194386 main.go:141] libmachine: (ha-150891) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa (-rw-------)
	I0731 22:41:04.300146 1194386 main.go:141] libmachine: (ha-150891) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 22:41:04.300160 1194386 main.go:141] libmachine: (ha-150891) DBG | About to run SSH command:
	I0731 22:41:04.300173 1194386 main.go:141] libmachine: (ha-150891) DBG | exit 0
	I0731 22:41:04.427992 1194386 main.go:141] libmachine: (ha-150891) DBG | SSH cmd err, output: <nil>: 
	I0731 22:41:04.428256 1194386 main.go:141] libmachine: (ha-150891) KVM machine creation complete!
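WaitForSSH above shells out to the system ssh client with non-interactive options and a throwaway `exit 0` command; an exit status of 0 means the guest's sshd is up. A hedged sketch of that probe, reusing the key path and address from the log (purely illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Values copied from the log above; adjust for your environment.
	key := "/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa"
	target := "docker@192.168.39.105"

	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-i", key,
		target, "exit 0")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("ssh not ready yet: %v (%s)\n", err, out)
	} else {
		fmt.Println("ssh is available")
	}
}
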
	I0731 22:41:04.428576 1194386 main.go:141] libmachine: (ha-150891) Calling .GetConfigRaw
	I0731 22:41:04.429106 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:04.429317 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:04.429459 1194386 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 22:41:04.429475 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:41:04.430805 1194386 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 22:41:04.430829 1194386 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 22:41:04.430836 1194386 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 22:41:04.430845 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:04.433301 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.433677 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.433694 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.433869 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:04.434068 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.434240 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.434401 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:04.434559 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:04.434796 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:04.434811 1194386 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 22:41:04.543286 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:41:04.543317 1194386 main.go:141] libmachine: Detecting the provisioner...
	I0731 22:41:04.543326 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:04.546258 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.546597 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.546629 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.546765 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:04.546976 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.547150 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.547289 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:04.547442 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:04.547635 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:04.547648 1194386 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 22:41:04.656499 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 22:41:04.656634 1194386 main.go:141] libmachine: found compatible host: buildroot
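Detecting the provisioner comes down to running `cat /etc/os-release` on the guest and matching the ID field ("buildroot" for the minikube ISO). A simplified local sketch of that parse:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/os-release")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	fields := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), "="); ok {
			fields[k] = strings.Trim(v, `"`)
		}
	}
	// The provisioner picks a code path based on the distro ID;
	// "buildroot" identifies the minikube ISO.
	fmt.Println("ID =", fields["ID"], "PRETTY_NAME =", fields["PRETTY_NAME"])
}
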
	I0731 22:41:04.656650 1194386 main.go:141] libmachine: Provisioning with buildroot...
	I0731 22:41:04.656665 1194386 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:41:04.656948 1194386 buildroot.go:166] provisioning hostname "ha-150891"
	I0731 22:41:04.656979 1194386 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:41:04.657174 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:04.659719 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.660076 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.660120 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.660289 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:04.660494 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.660667 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.660801 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:04.660968 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:04.661150 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:04.661164 1194386 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-150891 && echo "ha-150891" | sudo tee /etc/hostname
	I0731 22:41:04.784816 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150891
	
	I0731 22:41:04.784860 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:04.787627 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.788011 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.788044 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.788224 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:04.788425 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.788568 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.788752 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:04.788919 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:04.789126 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:04.789146 1194386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-150891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-150891/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-150891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:41:04.908378 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
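The hostname step runs the shell snippet above on the guest: if no /etc/hosts line ends with the hostname, it either rewrites the 127.0.1.1 entry or appends one. The same idea as a small Go sketch, operating on a local copy of the file rather than over SSH (the path is hypothetical):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func ensureHostEntry(path, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	text := string(data)
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(host) + `$`).MatchString(text) {
		return nil // hostname already present
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(text) {
		text = re.ReplaceAllString(text, "127.0.1.1 "+host)
	} else {
		if !strings.HasSuffix(text, "\n") {
			text += "\n"
		}
		text += "127.0.1.1 " + host + "\n"
	}
	return os.WriteFile(path, []byte(text), 0644)
}

func main() {
	// "hosts.copy" is a hypothetical scratch copy of the guest's /etc/hosts.
	if err := ensureHostEntry("hosts.copy", "ha-150891"); err != nil {
		fmt.Println("error:", err)
	}
}
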
	I0731 22:41:04.908418 1194386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 22:41:04.908449 1194386 buildroot.go:174] setting up certificates
	I0731 22:41:04.908465 1194386 provision.go:84] configureAuth start
	I0731 22:41:04.908480 1194386 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:41:04.908761 1194386 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:41:04.911505 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.911830 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.911848 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.912008 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:04.913965 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.914247 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.914274 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.914419 1194386 provision.go:143] copyHostCerts
	I0731 22:41:04.914453 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:41:04.914486 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 22:41:04.914495 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:41:04.914560 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 22:41:04.914640 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:41:04.914657 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 22:41:04.914663 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:41:04.914688 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 22:41:04.914731 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:41:04.914747 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 22:41:04.914753 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:41:04.914773 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 22:41:04.914833 1194386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.ha-150891 san=[127.0.0.1 192.168.39.105 ha-150891 localhost minikube]
	I0731 22:41:05.110288 1194386 provision.go:177] copyRemoteCerts
	I0731 22:41:05.110350 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:41:05.110378 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.112979 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.113348 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.113379 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.113551 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.113746 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.113889 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.114015 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:41:05.197429 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 22:41:05.197521 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 22:41:05.221033 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 22:41:05.221124 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0731 22:41:05.249459 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 22:41:05.249538 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 22:41:05.272106 1194386 provision.go:87] duration metric: took 363.612751ms to configureAuth
	I0731 22:41:05.272136 1194386 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:41:05.272326 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:41:05.272419 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.275035 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.275336 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.275360 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.275541 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.275728 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.275885 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.276008 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.276163 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:05.276381 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:05.276402 1194386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 22:41:05.545956 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 22:41:05.545991 1194386 main.go:141] libmachine: Checking connection to Docker...
	I0731 22:41:05.546000 1194386 main.go:141] libmachine: (ha-150891) Calling .GetURL
	I0731 22:41:05.547315 1194386 main.go:141] libmachine: (ha-150891) DBG | Using libvirt version 6000000
	I0731 22:41:05.549542 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.549911 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.549938 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.550147 1194386 main.go:141] libmachine: Docker is up and running!
	I0731 22:41:05.550165 1194386 main.go:141] libmachine: Reticulating splines...
	I0731 22:41:05.550172 1194386 client.go:171] duration metric: took 24.953879283s to LocalClient.Create
	I0731 22:41:05.550202 1194386 start.go:167] duration metric: took 24.953948776s to libmachine.API.Create "ha-150891"
	I0731 22:41:05.550215 1194386 start.go:293] postStartSetup for "ha-150891" (driver="kvm2")
	I0731 22:41:05.550228 1194386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:41:05.550253 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:05.550518 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:41:05.550546 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.552887 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.553264 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.553293 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.553427 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.553646 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.553821 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.553927 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:41:05.638306 1194386 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:41:05.642316 1194386 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:41:05.642356 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 22:41:05.642476 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 22:41:05.642578 1194386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 22:41:05.642592 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /etc/ssl/certs/11794002.pem
	I0731 22:41:05.642713 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:41:05.652005 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:41:05.674443 1194386 start.go:296] duration metric: took 124.211165ms for postStartSetup
	I0731 22:41:05.674517 1194386 main.go:141] libmachine: (ha-150891) Calling .GetConfigRaw
	I0731 22:41:05.675191 1194386 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:41:05.677842 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.678312 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.678341 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.678593 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:41:05.678780 1194386 start.go:128] duration metric: took 25.102776872s to createHost
	I0731 22:41:05.678802 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.681108 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.681384 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.681417 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.681567 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.681768 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.681945 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.682076 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.682248 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:05.682469 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:05.682488 1194386 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 22:41:05.792420 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465665.770360467
	
	I0731 22:41:05.792448 1194386 fix.go:216] guest clock: 1722465665.770360467
	I0731 22:41:05.792459 1194386 fix.go:229] Guest: 2024-07-31 22:41:05.770360467 +0000 UTC Remote: 2024-07-31 22:41:05.678790863 +0000 UTC m=+25.213575611 (delta=91.569604ms)
	I0731 22:41:05.792518 1194386 fix.go:200] guest clock delta is within tolerance: 91.569604ms
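The guest-clock check parses the `date +%s.%N` output from the VM and accepts the machine when the skew against the host clock stays inside a small tolerance (about 92ms here). A rough sketch of that comparison, with the timestamp string taken from the log and a made-up tolerance value:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Timestamp string as returned by `date +%s.%N` on the guest (from the log above).
	guestStr := "1722465665.770360467"
	secs, err := strconv.ParseFloat(guestStr, 64)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical threshold, not minikube's
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}
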
	I0731 22:41:05.792524 1194386 start.go:83] releasing machines lock for "ha-150891", held for 25.216603122s
	I0731 22:41:05.792556 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:05.792900 1194386 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:41:05.795610 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.795928 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.795974 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.796125 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:05.796595 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:05.796792 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:05.796889 1194386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 22:41:05.796934 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.797041 1194386 ssh_runner.go:195] Run: cat /version.json
	I0731 22:41:05.797065 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.799703 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.800032 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.800061 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.800082 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.800188 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.800404 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.800495 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.800514 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.800571 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.800664 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.800790 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.800772 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:41:05.800930 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.801081 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:41:05.920240 1194386 ssh_runner.go:195] Run: systemctl --version
	I0731 22:41:05.925953 1194386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 22:41:06.082497 1194386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 22:41:06.087909 1194386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:41:06.087979 1194386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:41:06.103788 1194386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 22:41:06.103818 1194386 start.go:495] detecting cgroup driver to use...
	I0731 22:41:06.103884 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:41:06.119532 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:41:06.133685 1194386 docker.go:217] disabling cri-docker service (if available) ...
	I0731 22:41:06.133744 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 22:41:06.147619 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 22:41:06.161135 1194386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 22:41:06.282997 1194386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 22:41:06.434011 1194386 docker.go:233] disabling docker service ...
	I0731 22:41:06.434099 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 22:41:06.448041 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 22:41:06.460849 1194386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 22:41:06.592412 1194386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 22:41:06.714931 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 22:41:06.729443 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:41:06.747342 1194386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 22:41:06.747405 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.757370 1194386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 22:41:06.757454 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.767795 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.777947 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.788189 1194386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:41:06.798625 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.808841 1194386 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.825259 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.835757 1194386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:41:06.845132 1194386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 22:41:06.845200 1194386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 22:41:06.858527 1194386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:41:06.868444 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:41:06.983481 1194386 ssh_runner.go:195] Run: sudo systemctl restart crio
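The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup, default sysctls) before restarting crio. A hedged local sketch of the same kind of edit with Go's regexp package, writing to a scratch copy instead of the real config:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	conf := []byte(`# scratch copy of 02-crio.conf
pause_image = "registry.k8s.io/pause:3.8"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`)
	edits := []struct{ pattern, repl string }{
		{`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
		{`(?m)^conmon_cgroup = .*$`, `conmon_cgroup = "pod"`},
	}
	for _, e := range edits {
		conf = regexp.MustCompile(e.pattern).ReplaceAll(conf, []byte(e.repl))
	}
	if err := os.WriteFile("02-crio.conf.scratch", conf, 0644); err != nil {
		fmt.Println("write:", err)
		return
	}
	fmt.Print(string(conf))
}
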
	I0731 22:41:07.126787 1194386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 22:41:07.126858 1194386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 22:41:07.131508 1194386 start.go:563] Will wait 60s for crictl version
	I0731 22:41:07.131564 1194386 ssh_runner.go:195] Run: which crictl
	I0731 22:41:07.135221 1194386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:41:07.171263 1194386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 22:41:07.171349 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:41:07.197291 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:41:07.225531 1194386 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 22:41:07.227103 1194386 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:41:07.229913 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:07.230265 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:07.230294 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:07.230510 1194386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 22:41:07.234402 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:41:07.246522 1194386 kubeadm.go:883] updating cluster {Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 22:41:07.246680 1194386 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:41:07.246750 1194386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:41:07.277126 1194386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 22:41:07.277206 1194386 ssh_runner.go:195] Run: which lz4
	I0731 22:41:07.280976 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 22:41:07.281081 1194386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 22:41:07.285018 1194386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 22:41:07.285055 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 22:41:08.624169 1194386 crio.go:462] duration metric: took 1.343113145s to copy over tarball
	I0731 22:41:08.624241 1194386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 22:41:10.788346 1194386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.164070863s)
	I0731 22:41:10.788383 1194386 crio.go:469] duration metric: took 2.164182212s to extract the tarball
	I0731 22:41:10.788394 1194386 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 22:41:10.825709 1194386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:41:10.873399 1194386 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 22:41:10.873429 1194386 cache_images.go:84] Images are preloaded, skipping loading
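The preload path scps a ~400 MB lz4 tarball of container images to the VM and unpacks it under /var with xattrs preserved, so the follow-up `crictl images` call finds everything already present. A minimal sketch of invoking that extraction step (paths as on the VM; run it only somewhere you actually want images unpacked):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Println("preloaded images extracted")
}
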
	I0731 22:41:10.873440 1194386 kubeadm.go:934] updating node { 192.168.39.105 8443 v1.30.3 crio true true} ...
	I0731 22:41:10.873580 1194386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-150891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:41:10.873654 1194386 ssh_runner.go:195] Run: crio config
	I0731 22:41:10.916824 1194386 cni.go:84] Creating CNI manager for ""
	I0731 22:41:10.916846 1194386 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 22:41:10.916858 1194386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 22:41:10.916881 1194386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.105 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-150891 NodeName:ha-150891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 22:41:10.917021 1194386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-150891"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
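The kubeadm config above is rendered from the cluster's node IP, node name, CIDRs, and component extra-args and later written to /var/tmp/minikube/kubeadm.yaml.new. A small sketch of that kind of templating with text/template (fragment only; the parameter struct is illustrative):

package main

import (
	"os"
	"text/template"
)

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	params := struct {
		NodeIP        string
		NodeName      string
		APIServerPort int
	}{
		NodeIP:        "192.168.39.105",
		NodeName:      "ha-150891",
		APIServerPort: 8443,
	}
	t := template.Must(template.New("kubeadm").Parse(fragment))
	_ = t.Execute(os.Stdout, params)
}
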
	
	I0731 22:41:10.917046 1194386 kube-vip.go:115] generating kube-vip config ...
	I0731 22:41:10.917090 1194386 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:41:10.932857 1194386 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:41:10.932998 1194386 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
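The static pod above runs kube-vip with control-plane load-balancing enabled, so the VIP 192.168.39.254:8443 follows whichever control-plane node currently holds the plndr-cp-lock lease. Once the cluster is up, a plain TCP dial is enough to confirm the VIP answers; a hedged sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// VIP and port taken from the manifest above; reachable only once kube-vip is running.
	addr := "192.168.39.254:8443"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP is accepting connections on", addr)
}
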
	I0731 22:41:10.933078 1194386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:41:10.942834 1194386 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 22:41:10.942932 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 22:41:10.952719 1194386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 22:41:10.969180 1194386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:41:10.985491 1194386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 22:41:11.001705 1194386 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0731 22:41:11.018193 1194386 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:41:11.021800 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:41:11.033871 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:41:11.158730 1194386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:41:11.175706 1194386 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891 for IP: 192.168.39.105
	I0731 22:41:11.175736 1194386 certs.go:194] generating shared ca certs ...
	I0731 22:41:11.175758 1194386 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.175968 1194386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 22:41:11.176025 1194386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 22:41:11.176038 1194386 certs.go:256] generating profile certs ...
	I0731 22:41:11.176134 1194386 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key
	I0731 22:41:11.176155 1194386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt with IP's: []
	I0731 22:41:11.342866 1194386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt ...
	I0731 22:41:11.342898 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt: {Name:mka7ac5725d8bbe92340ca35d53fce869b691752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.343080 1194386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key ...
	I0731 22:41:11.343092 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key: {Name:mk2dbd419cac26e8d9b1d180d735f6df2973a848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.343170 1194386 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.3d819f23
	I0731 22:41:11.343186 1194386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.3d819f23 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.105 192.168.39.254]
	I0731 22:41:11.446273 1194386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.3d819f23 ...
	I0731 22:41:11.446307 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.3d819f23: {Name:mk1d553d14c68d12e4fbac01a9a120a94f6e845a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.446479 1194386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.3d819f23 ...
	I0731 22:41:11.446494 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.3d819f23: {Name:mkcc3095f5ddb4b2831a10534845e98d0392f0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.446572 1194386 certs.go:381] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.3d819f23 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt
	I0731 22:41:11.446650 1194386 certs.go:385] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.3d819f23 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key
	I0731 22:41:11.446709 1194386 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key
	I0731 22:41:11.446724 1194386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt with IP's: []
	I0731 22:41:11.684370 1194386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt ...
	I0731 22:41:11.684408 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt: {Name:mk9556239b50cd6cb62e7d5272ceeed0a2985331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.684590 1194386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key ...
	I0731 22:41:11.684601 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key: {Name:mkb90591deb06e12c16008f6a11dd2ff071a9c50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
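Each profile cert generated above is a leaf certificate signed by the shared minikube CA, with the relevant service, loopback, node and VIP addresses baked in as IP SANs (for the apiserver cert: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.105 and the HA VIP 192.168.39.254). A condensed sketch of that pattern with crypto/x509; this is not minikube's crypto.go, and error handling in main is trimmed for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a leaf certificate for the given IP SANs, signed by
// caCert/caKey, and returns the DER-encoded cert plus the leaf's private key.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // 26280h, matching the profile's CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		IPAddresses:  ips, // the SAN list, e.g. 10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.105 192.168.39.254
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	return der, leafKey, err
}

func main() {
	// Throwaway CA standing in for the shared minikubeCA from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.105"), net.ParseIP("192.168.39.254")}
	if _, _, err := signServingCert(caCert, caKey, ips); err != nil {
		panic(err)
	}
}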
	I0731 22:41:11.684673 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:41:11.684694 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:41:11.684708 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:41:11.684721 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:41:11.684739 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:41:11.684753 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:41:11.684765 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:41:11.684777 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:41:11.684833 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 22:41:11.684872 1194386 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 22:41:11.684879 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 22:41:11.684899 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 22:41:11.684921 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 22:41:11.684945 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 22:41:11.684996 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:41:11.685029 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:41:11.685044 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem -> /usr/share/ca-certificates/1179400.pem
	I0731 22:41:11.685056 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /usr/share/ca-certificates/11794002.pem
	I0731 22:41:11.685576 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:41:11.710782 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:41:11.733663 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:41:11.756970 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 22:41:11.781366 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 22:41:11.805830 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 22:41:11.830355 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:41:11.856325 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:41:11.880167 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:41:11.903616 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 22:41:11.931832 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 22:41:11.958758 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 22:41:11.977199 1194386 ssh_runner.go:195] Run: openssl version
	I0731 22:41:11.983009 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:41:11.998583 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:41:12.003074 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:41:12.003136 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:41:12.008826 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:41:12.019438 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 22:41:12.029943 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 22:41:12.034176 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 22:41:12.034238 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 22:41:12.039750 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
	I0731 22:41:12.050157 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 22:41:12.060658 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 22:41:12.065078 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 22:41:12.065153 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 22:41:12.070665 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
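The sequence above installs each cert under /usr/share/ca-certificates and then creates the <subject-hash>.0 symlink in /etc/ssl/certs that OpenSSL's directory lookup expects; `openssl x509 -hash -noout` prints that subject hash (b5213941, 51391683 and 3ec20f2e in this run). A small sketch of the same two steps, shelling out to openssl exactly as the log does (paths are the ones from this run):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkIntoCertDir computes the OpenSSL subject hash of certPath and creates
// the <certDir>/<hash>.0 symlink pointing at it, skipping the link if one
// already exists. Mirrors the `openssl x509 -hash` + `ln -fs` steps above.
func linkIntoCertDir(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkIntoCertDir("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}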
	I0731 22:41:12.081211 1194386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:41:12.085259 1194386 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:41:12.085320 1194386 kubeadm.go:392] StartCluster: {Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:41:12.085423 1194386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 22:41:12.085475 1194386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 22:41:12.125444 1194386 cri.go:89] found id: ""
	I0731 22:41:12.125526 1194386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 22:41:12.135046 1194386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 22:41:12.145651 1194386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 22:41:12.157866 1194386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 22:41:12.157887 1194386 kubeadm.go:157] found existing configuration files:
	
	I0731 22:41:12.157933 1194386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 22:41:12.166742 1194386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 22:41:12.166808 1194386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 22:41:12.176351 1194386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 22:41:12.185445 1194386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 22:41:12.185530 1194386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 22:41:12.194673 1194386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 22:41:12.203308 1194386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 22:41:12.203375 1194386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 22:41:12.212579 1194386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 22:41:12.221043 1194386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 22:41:12.221110 1194386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 22:41:12.230240 1194386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 22:41:12.337139 1194386 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 22:41:12.337231 1194386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 22:41:12.454022 1194386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 22:41:12.454122 1194386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 22:41:12.454202 1194386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 22:41:12.651958 1194386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 22:41:12.820071 1194386 out.go:204]   - Generating certificates and keys ...
	I0731 22:41:12.820207 1194386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 22:41:12.820294 1194386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 22:41:12.820392 1194386 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 22:41:13.110139 1194386 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 22:41:13.216541 1194386 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 22:41:13.411109 1194386 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 22:41:13.619081 1194386 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 22:41:13.619351 1194386 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-150891 localhost] and IPs [192.168.39.105 127.0.0.1 ::1]
	I0731 22:41:13.808874 1194386 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 22:41:13.809040 1194386 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-150891 localhost] and IPs [192.168.39.105 127.0.0.1 ::1]
	I0731 22:41:13.899652 1194386 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 22:41:14.212030 1194386 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 22:41:14.277510 1194386 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 22:41:14.277689 1194386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 22:41:14.357327 1194386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 22:41:14.457066 1194386 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 22:41:14.586947 1194386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 22:41:14.708144 1194386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 22:41:14.897018 1194386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 22:41:14.897969 1194386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 22:41:14.902912 1194386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 22:41:14.904887 1194386 out.go:204]   - Booting up control plane ...
	I0731 22:41:14.905023 1194386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 22:41:14.905165 1194386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 22:41:14.905736 1194386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 22:41:14.920678 1194386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 22:41:14.921853 1194386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 22:41:14.921920 1194386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 22:41:15.049437 1194386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 22:41:15.049548 1194386 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 22:41:16.550294 1194386 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501714672s
	I0731 22:41:16.550407 1194386 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 22:41:22.251964 1194386 kubeadm.go:310] [api-check] The API server is healthy after 5.704368146s
	I0731 22:41:22.264322 1194386 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 22:41:22.281315 1194386 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 22:41:22.321405 1194386 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 22:41:22.321586 1194386 kubeadm.go:310] [mark-control-plane] Marking the node ha-150891 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 22:41:22.335168 1194386 kubeadm.go:310] [bootstrap-token] Using token: x6vrvl.scxwa3uy3g8m39yp
	I0731 22:41:22.336566 1194386 out.go:204]   - Configuring RBAC rules ...
	I0731 22:41:22.336714 1194386 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 22:41:22.344044 1194386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 22:41:22.352698 1194386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 22:41:22.357423 1194386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 22:41:22.362009 1194386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 22:41:22.370027 1194386 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 22:41:22.658209 1194386 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 22:41:23.098221 1194386 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 22:41:23.659119 1194386 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 22:41:23.660605 1194386 kubeadm.go:310] 
	I0731 22:41:23.660707 1194386 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 22:41:23.660718 1194386 kubeadm.go:310] 
	I0731 22:41:23.660809 1194386 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 22:41:23.660818 1194386 kubeadm.go:310] 
	I0731 22:41:23.660854 1194386 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 22:41:23.660944 1194386 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 22:41:23.661006 1194386 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 22:41:23.661016 1194386 kubeadm.go:310] 
	I0731 22:41:23.661087 1194386 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 22:41:23.661095 1194386 kubeadm.go:310] 
	I0731 22:41:23.661163 1194386 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 22:41:23.661178 1194386 kubeadm.go:310] 
	I0731 22:41:23.661254 1194386 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 22:41:23.661368 1194386 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 22:41:23.661449 1194386 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 22:41:23.661456 1194386 kubeadm.go:310] 
	I0731 22:41:23.661542 1194386 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 22:41:23.661673 1194386 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 22:41:23.661696 1194386 kubeadm.go:310] 
	I0731 22:41:23.661818 1194386 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x6vrvl.scxwa3uy3g8m39yp \
	I0731 22:41:23.661947 1194386 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef \
	I0731 22:41:23.661971 1194386 kubeadm.go:310] 	--control-plane 
	I0731 22:41:23.661975 1194386 kubeadm.go:310] 
	I0731 22:41:23.662048 1194386 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 22:41:23.662054 1194386 kubeadm.go:310] 
	I0731 22:41:23.662122 1194386 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x6vrvl.scxwa3uy3g8m39yp \
	I0731 22:41:23.662237 1194386 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef 
	I0731 22:41:23.662727 1194386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
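The --discovery-token-ca-cert-hash printed in both join commands above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info, which kubeadm validates during TLS bootstrap. A short sketch of recomputing that digest from ca.crt (the hashing rule is standard kubeadm behaviour; this helper itself is illustrative):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash returns the "sha256:<hex>" discovery hash for a PEM-encoded CA
// certificate, computed over its DER-encoded Subject Public Key Info.
func caCertHash(pemPath string) (string, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(h) // should match the hash in the kubeadm join command above
}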
	I0731 22:41:23.662761 1194386 cni.go:84] Creating CNI manager for ""
	I0731 22:41:23.662769 1194386 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 22:41:23.664440 1194386 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 22:41:23.665937 1194386 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 22:41:23.671296 1194386 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 22:41:23.671319 1194386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 22:41:23.688520 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 22:41:24.012012 1194386 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 22:41:24.012150 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:24.012216 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-150891 minikube.k8s.io/updated_at=2024_07_31T22_41_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-150891 minikube.k8s.io/primary=true
	I0731 22:41:24.030175 1194386 ops.go:34] apiserver oom_adj: -16
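The oom_adj value comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run above; -16 confirms the kubelet started the API server with a strongly negative OOM adjustment. A rough Go version of that check, scanning /proc for the process by command name and reading the same (legacy) oom_adj file the log reads:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// oomAdjByComm scans /proc for a process whose comm matches name and returns
// the contents of its oom_adj file (the same file the log's cat reads).
func oomAdjByComm(name string) (string, error) {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
		if err != nil {
			continue // not a PID directory, or the process already exited
		}
		if strings.TrimSpace(string(comm)) != name {
			continue
		}
		adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(adj)), nil
	}
	return "", fmt.Errorf("no process named %q found", name)
}

func main() {
	adj, err := oomAdjByComm("kube-apiserver")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver oom_adj:", adj)
}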
	I0731 22:41:24.209920 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:24.710079 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:25.210961 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:25.710439 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:26.210064 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:26.710701 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:27.210208 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:27.710168 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:28.210619 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:28.710698 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:29.210551 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:29.710836 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:30.210738 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:30.710521 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:31.210064 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:31.709968 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:32.210941 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:32.710909 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:33.210335 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:33.710327 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:34.210666 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:34.710753 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:35.210848 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:35.710898 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:36.210690 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:36.332701 1194386 kubeadm.go:1113] duration metric: took 12.320634484s to wait for elevateKubeSystemPrivileges
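The repeated `kubectl get sa default` calls above are a simple readiness poll: the default ServiceAccount only appears once the controller-manager's service-account controller is up, so minikube retries roughly every 500ms until the command succeeds. A hedged sketch of the same polling loop, shelling out to kubectl rather than using minikube's internals:

package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it exits 0 or the
// context expires, mirroring the elevateKubeSystemPrivileges wait above.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil // service account exists; RBAC bootstrap can proceed
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for default service account: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx, os.Getenv("KUBECONFIG")); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}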
	I0731 22:41:36.332742 1194386 kubeadm.go:394] duration metric: took 24.247425712s to StartCluster
	I0731 22:41:36.332762 1194386 settings.go:142] acquiring lock: {Name:mk076897bfd1af81579aafbccfd5a932e011b343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:36.332873 1194386 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:41:36.333675 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/kubeconfig: {Name:mk2865fa7a14d2aa7ec2bbf6e970de47767d4a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:36.333909 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 22:41:36.333919 1194386 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:41:36.333946 1194386 start.go:241] waiting for startup goroutines ...
	I0731 22:41:36.333961 1194386 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 22:41:36.334019 1194386 addons.go:69] Setting storage-provisioner=true in profile "ha-150891"
	I0731 22:41:36.334029 1194386 addons.go:69] Setting default-storageclass=true in profile "ha-150891"
	I0731 22:41:36.334071 1194386 addons.go:234] Setting addon storage-provisioner=true in "ha-150891"
	I0731 22:41:36.334110 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:41:36.334156 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:41:36.334072 1194386 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-150891"
	I0731 22:41:36.335195 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:36.335272 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:36.336262 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:36.336763 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:36.351259 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0731 22:41:36.351773 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:36.352285 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:36.352316 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:36.352680 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:36.352903 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:41:36.355520 1194386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:41:36.355877 1194386 kapi.go:59] client config for ha-150891: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d035c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 22:41:36.356454 1194386 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 22:41:36.356689 1194386 addons.go:234] Setting addon default-storageclass=true in "ha-150891"
	I0731 22:41:36.356731 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:41:36.357113 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:36.357131 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:36.357957 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35383
	I0731 22:41:36.358483 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:36.359098 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:36.359125 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:36.359473 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:36.360078 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:36.360142 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:36.373982 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0731 22:41:36.374600 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:36.375102 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:36.375129 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:36.375493 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:36.376053 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40873
	I0731 22:41:36.376316 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:36.376369 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:36.376516 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:36.377000 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:36.377021 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:36.377391 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:36.377583 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:41:36.379506 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:36.381356 1194386 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 22:41:36.382619 1194386 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:41:36.382642 1194386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 22:41:36.382664 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:36.386025 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:36.386516 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:36.386541 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:36.386731 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:36.386954 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:36.387136 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:36.387266 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
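The sshutil "new ssh client" lines correspond to dialing the node over SSH with the per-machine id_rsa key before the addon manifests are scp'd across. A minimal sketch of that connection setup with golang.org/x/crypto/ssh; host-key checking is skipped here purely for brevity, and minikube's own handling differs:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialNode opens an SSH connection to addr (e.g. "192.168.39.105:22") using
// the machine's private key, roughly what the sshutil lines above are doing.
func dialNode(addr, user, keyPath string) (*ssh.Client, error) {
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only: do not skip host-key verification in real code
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := dialNode("192.168.39.105:22", "docker", os.ExpandEnv("$HOME/.minikube/machines/ha-150891/id_rsa"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer client.Close()
	fmt.Println("connected")
}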
	I0731 22:41:36.394148 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I0731 22:41:36.394615 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:36.395123 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:36.395145 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:36.395468 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:36.395664 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:41:36.397216 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:36.397444 1194386 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 22:41:36.397458 1194386 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 22:41:36.397472 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:36.400128 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:36.400612 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:36.400633 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:36.400866 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:36.401035 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:36.401217 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:36.401326 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:41:36.482204 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0731 22:41:36.533458 1194386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:41:36.591754 1194386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 22:41:36.977914 1194386 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
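The sed pipeline above edits CoreDNS's Corefile inside its ConfigMap, inserting a hosts block that resolves host.minikube.internal to the gateway (192.168.39.1) ahead of the `forward . /etc/resolv.conf` plugin, then replaces the ConfigMap with kubectl. A plain-string sketch of that Corefile transformation (a hypothetical helper, same idea as the sed expression; the log's pipeline also adds a `log` directive, which is omitted here):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts block for hostname->ip immediately
// before the "forward . /etc/resolv.conf" line of a Corefile.
func injectHostRecord(corefile, ip, hostname string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s %s\n           fallthrough\n        }\n", ip, hostname)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}`
	fmt.Println(injectHostRecord(corefile, "192.168.39.1", "host.minikube.internal"))
}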
	I0731 22:41:37.231623 1194386 main.go:141] libmachine: Making call to close driver server
	I0731 22:41:37.231654 1194386 main.go:141] libmachine: (ha-150891) Calling .Close
	I0731 22:41:37.231701 1194386 main.go:141] libmachine: Making call to close driver server
	I0731 22:41:37.231726 1194386 main.go:141] libmachine: (ha-150891) Calling .Close
	I0731 22:41:37.231984 1194386 main.go:141] libmachine: (ha-150891) DBG | Closing plugin on server side
	I0731 22:41:37.232023 1194386 main.go:141] libmachine: Successfully made call to close driver server
	I0731 22:41:37.232031 1194386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 22:41:37.232040 1194386 main.go:141] libmachine: Making call to close driver server
	I0731 22:41:37.232051 1194386 main.go:141] libmachine: (ha-150891) Calling .Close
	I0731 22:41:37.232105 1194386 main.go:141] libmachine: Successfully made call to close driver server
	I0731 22:41:37.232119 1194386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 22:41:37.232128 1194386 main.go:141] libmachine: Making call to close driver server
	I0731 22:41:37.232173 1194386 main.go:141] libmachine: (ha-150891) DBG | Closing plugin on server side
	I0731 22:41:37.232244 1194386 main.go:141] libmachine: (ha-150891) Calling .Close
	I0731 22:41:37.232346 1194386 main.go:141] libmachine: Successfully made call to close driver server
	I0731 22:41:37.232354 1194386 main.go:141] libmachine: (ha-150891) DBG | Closing plugin on server side
	I0731 22:41:37.232361 1194386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 22:41:37.232502 1194386 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 22:41:37.232513 1194386 main.go:141] libmachine: Successfully made call to close driver server
	I0731 22:41:37.232518 1194386 round_trippers.go:469] Request Headers:
	I0731 22:41:37.232526 1194386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 22:41:37.232542 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:41:37.232552 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:41:37.248384 1194386 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0731 22:41:37.249211 1194386 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 22:41:37.249230 1194386 round_trippers.go:469] Request Headers:
	I0731 22:41:37.249242 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:41:37.249249 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:41:37.249256 1194386 round_trippers.go:473]     Content-Type: application/json
	I0731 22:41:37.253415 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:41:37.253901 1194386 main.go:141] libmachine: Making call to close driver server
	I0731 22:41:37.253915 1194386 main.go:141] libmachine: (ha-150891) Calling .Close
	I0731 22:41:37.254222 1194386 main.go:141] libmachine: Successfully made call to close driver server
	I0731 22:41:37.254237 1194386 main.go:141] libmachine: (ha-150891) DBG | Closing plugin on server side
	I0731 22:41:37.254244 1194386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 22:41:37.256161 1194386 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 22:41:37.257403 1194386 addons.go:510] duration metric: took 923.440407ms for enable addons: enabled=[storage-provisioner default-storageclass]
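The round_trippers lines above are minikube's client-go traffic: a GET on /apis/storage.k8s.io/v1/storageclasses followed by a PUT updating the "standard" StorageClass, all through the HA VIP at https://192.168.39.254:8443 and authenticated with the profile's client certificate. A bare-bones sketch of issuing the same authenticated GET with net/http; this is illustrative, not how minikube builds its client, and it assumes the default ~/.minikube layout rather than the Jenkins paths in this log:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	home, _ := os.UserHomeDir()
	profile := home + "/.minikube/profiles/ha-150891/"

	// Client certificate/key generated earlier in this log, plus the cluster CA.
	cert, err := tls.LoadX509KeyPair(profile+"client.crt", profile+"client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(home + "/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert},
		RootCAs:      pool,
	}}}

	// Same endpoint the round_trippers lines show, reached through the HA VIP.
	resp, err := client.Get("https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}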
	I0731 22:41:37.257451 1194386 start.go:246] waiting for cluster config update ...
	I0731 22:41:37.257466 1194386 start.go:255] writing updated cluster config ...
	I0731 22:41:37.259122 1194386 out.go:177] 
	I0731 22:41:37.260573 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:41:37.260653 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:41:37.262170 1194386 out.go:177] * Starting "ha-150891-m02" control-plane node in "ha-150891" cluster
	I0731 22:41:37.263347 1194386 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:41:37.263376 1194386 cache.go:56] Caching tarball of preloaded images
	I0731 22:41:37.263489 1194386 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 22:41:37.263501 1194386 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 22:41:37.263567 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:41:37.263750 1194386 start.go:360] acquireMachinesLock for ha-150891-m02: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:41:37.263792 1194386 start.go:364] duration metric: took 23.392µs to acquireMachinesLock for "ha-150891-m02"
	I0731 22:41:37.263809 1194386 start.go:93] Provisioning new machine with config: &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:41:37.263902 1194386 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0731 22:41:37.265399 1194386 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:41:37.265485 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:37.265511 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:37.281435 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I0731 22:41:37.281916 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:37.282361 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:37.282382 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:37.282815 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:37.283049 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetMachineName
	I0731 22:41:37.283211 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:41:37.283390 1194386 start.go:159] libmachine.API.Create for "ha-150891" (driver="kvm2")
	I0731 22:41:37.283418 1194386 client.go:168] LocalClient.Create starting
	I0731 22:41:37.283458 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem
	I0731 22:41:37.283500 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:41:37.283520 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:41:37.283591 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem
	I0731 22:41:37.283622 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:41:37.283638 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:41:37.283660 1194386 main.go:141] libmachine: Running pre-create checks...
	I0731 22:41:37.283671 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .PreCreateCheck
	I0731 22:41:37.283878 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetConfigRaw
	I0731 22:41:37.284351 1194386 main.go:141] libmachine: Creating machine...
	I0731 22:41:37.284371 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .Create
	I0731 22:41:37.284521 1194386 main.go:141] libmachine: (ha-150891-m02) Creating KVM machine...
	I0731 22:41:37.285838 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found existing default KVM network
	I0731 22:41:37.285981 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found existing private KVM network mk-ha-150891
	I0731 22:41:37.286143 1194386 main.go:141] libmachine: (ha-150891-m02) Setting up store path in /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02 ...
	I0731 22:41:37.286168 1194386 main.go:141] libmachine: (ha-150891-m02) Building disk image from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 22:41:37.286228 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:37.286125 1194775 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:41:37.286362 1194386 main.go:141] libmachine: (ha-150891-m02) Downloading /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:41:37.559348 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:37.559213 1194775 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa...
	I0731 22:41:37.747723 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:37.747586 1194775 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/ha-150891-m02.rawdisk...
	I0731 22:41:37.747751 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Writing magic tar header
	I0731 22:41:37.747761 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Writing SSH key tar header
	I0731 22:41:37.747769 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:37.747733 1194775 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02 ...
	I0731 22:41:37.747892 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02
	I0731 22:41:37.747917 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines
	I0731 22:41:37.747930 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02 (perms=drwx------)
	I0731 22:41:37.747945 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines (perms=drwxr-xr-x)
	I0731 22:41:37.747956 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:41:37.747967 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube (perms=drwxr-xr-x)
	I0731 22:41:37.747980 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186 (perms=drwxrwxr-x)
	I0731 22:41:37.747990 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186
	I0731 22:41:37.747998 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 22:41:37.748005 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 22:41:37.748017 1194386 main.go:141] libmachine: (ha-150891-m02) Creating domain...
	I0731 22:41:37.748033 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 22:41:37.748045 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins
	I0731 22:41:37.748054 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home
	I0731 22:41:37.748063 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Skipping /home - not owner
	I0731 22:41:37.749054 1194386 main.go:141] libmachine: (ha-150891-m02) define libvirt domain using xml: 
	I0731 22:41:37.749082 1194386 main.go:141] libmachine: (ha-150891-m02) <domain type='kvm'>
	I0731 22:41:37.749094 1194386 main.go:141] libmachine: (ha-150891-m02)   <name>ha-150891-m02</name>
	I0731 22:41:37.749102 1194386 main.go:141] libmachine: (ha-150891-m02)   <memory unit='MiB'>2200</memory>
	I0731 22:41:37.749111 1194386 main.go:141] libmachine: (ha-150891-m02)   <vcpu>2</vcpu>
	I0731 22:41:37.749121 1194386 main.go:141] libmachine: (ha-150891-m02)   <features>
	I0731 22:41:37.749126 1194386 main.go:141] libmachine: (ha-150891-m02)     <acpi/>
	I0731 22:41:37.749131 1194386 main.go:141] libmachine: (ha-150891-m02)     <apic/>
	I0731 22:41:37.749137 1194386 main.go:141] libmachine: (ha-150891-m02)     <pae/>
	I0731 22:41:37.749141 1194386 main.go:141] libmachine: (ha-150891-m02)     
	I0731 22:41:37.749152 1194386 main.go:141] libmachine: (ha-150891-m02)   </features>
	I0731 22:41:37.749160 1194386 main.go:141] libmachine: (ha-150891-m02)   <cpu mode='host-passthrough'>
	I0731 22:41:37.749165 1194386 main.go:141] libmachine: (ha-150891-m02)   
	I0731 22:41:37.749169 1194386 main.go:141] libmachine: (ha-150891-m02)   </cpu>
	I0731 22:41:37.749174 1194386 main.go:141] libmachine: (ha-150891-m02)   <os>
	I0731 22:41:37.749179 1194386 main.go:141] libmachine: (ha-150891-m02)     <type>hvm</type>
	I0731 22:41:37.749210 1194386 main.go:141] libmachine: (ha-150891-m02)     <boot dev='cdrom'/>
	I0731 22:41:37.749238 1194386 main.go:141] libmachine: (ha-150891-m02)     <boot dev='hd'/>
	I0731 22:41:37.749250 1194386 main.go:141] libmachine: (ha-150891-m02)     <bootmenu enable='no'/>
	I0731 22:41:37.749261 1194386 main.go:141] libmachine: (ha-150891-m02)   </os>
	I0731 22:41:37.749273 1194386 main.go:141] libmachine: (ha-150891-m02)   <devices>
	I0731 22:41:37.749285 1194386 main.go:141] libmachine: (ha-150891-m02)     <disk type='file' device='cdrom'>
	I0731 22:41:37.749309 1194386 main.go:141] libmachine: (ha-150891-m02)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/boot2docker.iso'/>
	I0731 22:41:37.749324 1194386 main.go:141] libmachine: (ha-150891-m02)       <target dev='hdc' bus='scsi'/>
	I0731 22:41:37.749335 1194386 main.go:141] libmachine: (ha-150891-m02)       <readonly/>
	I0731 22:41:37.749343 1194386 main.go:141] libmachine: (ha-150891-m02)     </disk>
	I0731 22:41:37.749357 1194386 main.go:141] libmachine: (ha-150891-m02)     <disk type='file' device='disk'>
	I0731 22:41:37.749371 1194386 main.go:141] libmachine: (ha-150891-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 22:41:37.749388 1194386 main.go:141] libmachine: (ha-150891-m02)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/ha-150891-m02.rawdisk'/>
	I0731 22:41:37.749404 1194386 main.go:141] libmachine: (ha-150891-m02)       <target dev='hda' bus='virtio'/>
	I0731 22:41:37.749415 1194386 main.go:141] libmachine: (ha-150891-m02)     </disk>
	I0731 22:41:37.749428 1194386 main.go:141] libmachine: (ha-150891-m02)     <interface type='network'>
	I0731 22:41:37.749441 1194386 main.go:141] libmachine: (ha-150891-m02)       <source network='mk-ha-150891'/>
	I0731 22:41:37.749452 1194386 main.go:141] libmachine: (ha-150891-m02)       <model type='virtio'/>
	I0731 22:41:37.749462 1194386 main.go:141] libmachine: (ha-150891-m02)     </interface>
	I0731 22:41:37.749477 1194386 main.go:141] libmachine: (ha-150891-m02)     <interface type='network'>
	I0731 22:41:37.749490 1194386 main.go:141] libmachine: (ha-150891-m02)       <source network='default'/>
	I0731 22:41:37.749511 1194386 main.go:141] libmachine: (ha-150891-m02)       <model type='virtio'/>
	I0731 22:41:37.749523 1194386 main.go:141] libmachine: (ha-150891-m02)     </interface>
	I0731 22:41:37.749534 1194386 main.go:141] libmachine: (ha-150891-m02)     <serial type='pty'>
	I0731 22:41:37.749554 1194386 main.go:141] libmachine: (ha-150891-m02)       <target port='0'/>
	I0731 22:41:37.749572 1194386 main.go:141] libmachine: (ha-150891-m02)     </serial>
	I0731 22:41:37.749585 1194386 main.go:141] libmachine: (ha-150891-m02)     <console type='pty'>
	I0731 22:41:37.749599 1194386 main.go:141] libmachine: (ha-150891-m02)       <target type='serial' port='0'/>
	I0731 22:41:37.749611 1194386 main.go:141] libmachine: (ha-150891-m02)     </console>
	I0731 22:41:37.749621 1194386 main.go:141] libmachine: (ha-150891-m02)     <rng model='virtio'>
	I0731 22:41:37.749631 1194386 main.go:141] libmachine: (ha-150891-m02)       <backend model='random'>/dev/random</backend>
	I0731 22:41:37.749637 1194386 main.go:141] libmachine: (ha-150891-m02)     </rng>
	I0731 22:41:37.749642 1194386 main.go:141] libmachine: (ha-150891-m02)     
	I0731 22:41:37.749649 1194386 main.go:141] libmachine: (ha-150891-m02)     
	I0731 22:41:37.749654 1194386 main.go:141] libmachine: (ha-150891-m02)   </devices>
	I0731 22:41:37.749664 1194386 main.go:141] libmachine: (ha-150891-m02) </domain>
	I0731 22:41:37.749674 1194386 main.go:141] libmachine: (ha-150891-m02) 
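The block above is the complete libvirt definition the kvm2 driver submits for the new node: a 2-vCPU, 2200 MiB guest that boots from the boot2docker ISO, a raw virtio disk, and two virtio NICs (the private mk-ha-150891 network plus libvirt's default network). Below is a minimal Go sketch of rendering a comparable definition with text/template; the type, field names, and paths are illustrative stand-ins, not minikube's actual generator.

package main

import (
	"os"
	"text/template"
)

// domainConfig holds the handful of values that vary between nodes in the
// XML above; the field names are illustrative, not minikube's own types.
type domainConfig struct {
	Name       string
	MemoryMiB  int
	VCPUs      int
	ISOPath    string
	DiskPath   string
	PrivateNet string
}

// domainTmpl is a trimmed-down version of the definition printed in the log:
// CD-ROM with the ISO, a raw virtio disk, and two virtio network interfaces.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.PrivateNet}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:       "ha-150891-m02",
		MemoryMiB:  2200,
		VCPUs:      2,
		ISOPath:    "/path/to/boot2docker.iso", // placeholder paths
		DiskPath:   "/path/to/ha-150891-m02.rawdisk",
		PrivateNet: "mk-ha-150891",
	}
	// Render the XML to stdout; defining it against libvirt is left out here.
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}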
	I0731 22:41:37.756306 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:f0:b6:68 in network default
	I0731 22:41:37.756846 1194386 main.go:141] libmachine: (ha-150891-m02) Ensuring networks are active...
	I0731 22:41:37.756870 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:37.757613 1194386 main.go:141] libmachine: (ha-150891-m02) Ensuring network default is active
	I0731 22:41:37.757887 1194386 main.go:141] libmachine: (ha-150891-m02) Ensuring network mk-ha-150891 is active
	I0731 22:41:37.758199 1194386 main.go:141] libmachine: (ha-150891-m02) Getting domain xml...
	I0731 22:41:37.758754 1194386 main.go:141] libmachine: (ha-150891-m02) Creating domain...
	I0731 22:41:39.003049 1194386 main.go:141] libmachine: (ha-150891-m02) Waiting to get IP...
	I0731 22:41:39.003772 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:39.004166 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:39.004243 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:39.004164 1194775 retry.go:31] will retry after 204.235682ms: waiting for machine to come up
	I0731 22:41:39.209779 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:39.210251 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:39.210275 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:39.210205 1194775 retry.go:31] will retry after 356.106914ms: waiting for machine to come up
	I0731 22:41:39.568003 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:39.568563 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:39.568595 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:39.568507 1194775 retry.go:31] will retry after 368.623567ms: waiting for machine to come up
	I0731 22:41:39.939393 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:39.939920 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:39.939948 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:39.939881 1194775 retry.go:31] will retry after 506.801083ms: waiting for machine to come up
	I0731 22:41:40.448839 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:40.449376 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:40.449407 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:40.449339 1194775 retry.go:31] will retry after 477.617493ms: waiting for machine to come up
	I0731 22:41:40.928985 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:40.929381 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:40.929405 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:40.929331 1194775 retry.go:31] will retry after 831.102078ms: waiting for machine to come up
	I0731 22:41:41.762028 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:41.762523 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:41.762547 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:41.762481 1194775 retry.go:31] will retry after 1.114057632s: waiting for machine to come up
	I0731 22:41:42.878288 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:42.878818 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:42.878873 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:42.878793 1194775 retry.go:31] will retry after 903.129066ms: waiting for machine to come up
	I0731 22:41:43.783929 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:43.784448 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:43.784485 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:43.784394 1194775 retry.go:31] will retry after 1.316496541s: waiting for machine to come up
	I0731 22:41:45.102179 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:45.102732 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:45.102762 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:45.102661 1194775 retry.go:31] will retry after 1.883859618s: waiting for machine to come up
	I0731 22:41:46.988949 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:46.989490 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:46.989518 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:46.989440 1194775 retry.go:31] will retry after 2.374845063s: waiting for machine to come up
	I0731 22:41:49.367716 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:49.368167 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:49.368198 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:49.368118 1194775 retry.go:31] will retry after 2.338221125s: waiting for machine to come up
	I0731 22:41:51.710267 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:51.710794 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:51.710831 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:51.710726 1194775 retry.go:31] will retry after 4.46190766s: waiting for machine to come up
	I0731 22:41:56.173775 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:56.174219 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:56.174238 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:56.174193 1194775 retry.go:31] will retry after 5.387637544s: waiting for machine to come up
	I0731 22:42:01.566356 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.566905 1194386 main.go:141] libmachine: (ha-150891-m02) Found IP for machine: 192.168.39.224
	I0731 22:42:01.566930 1194386 main.go:141] libmachine: (ha-150891-m02) Reserving static IP address...
	I0731 22:42:01.566944 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has current primary IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.567290 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find host DHCP lease matching {name: "ha-150891-m02", mac: "52:54:00:60:a1:dd", ip: "192.168.39.224"} in network mk-ha-150891
	I0731 22:42:01.650767 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Getting to WaitForSSH function...
	I0731 22:42:01.650796 1194386 main.go:141] libmachine: (ha-150891-m02) Reserved static IP address: 192.168.39.224
	I0731 22:42:01.650808 1194386 main.go:141] libmachine: (ha-150891-m02) Waiting for SSH to be available...
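The repeated "will retry after ..." lines above come from polling libvirt's DHCP leases until the new MAC address shows up with an address, backing off a little more on each attempt. A small Go sketch of that poll-with-growing-backoff pattern follows; lookupLeaseIP is a hypothetical helper standing in for the real lease lookup.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for the real DHCP-lease lookup against libvirt;
// it is a hypothetical helper for this sketch only.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP mirrors the "will retry after ..." pattern in the log: poll for
// the machine's IP with a randomized, growing backoff until a deadline.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	wait := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		// Add jitter and grow the interval, roughly like the retry helper in the log.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait *= 2
	}
	return "", fmt.Errorf("no IP for %s after %v", mac, deadline)
}

func main() {
	if ip, err := waitForIP("52:54:00:60:a1:dd", 2*time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}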
	I0731 22:42:01.653594 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.654012 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:01.654034 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.654205 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Using SSH client type: external
	I0731 22:42:01.654226 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa (-rw-------)
	I0731 22:42:01.654256 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 22:42:01.654273 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | About to run SSH command:
	I0731 22:42:01.654286 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | exit 0
	I0731 22:42:01.780157 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | SSH cmd err, output: <nil>: 
	I0731 22:42:01.780391 1194386 main.go:141] libmachine: (ha-150891-m02) KVM machine creation complete!
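Before creation is reported complete, the driver shells out to the system ssh client with the options shown above and runs exit 0; a zero exit status is taken as proof that sshd is reachable with the generated key. A short Go sketch of the same check via os/exec, with the address and key path as placeholder values:

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `ssh ... exit 0` with the same kinds of options used by the
// external SSH client in the log; addr and keyPath are illustrative values.
func sshReady(addr, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + addr,
		"exit", "0",
	}
	// A zero exit status from `exit 0` means sshd is up and the key works.
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	fmt.Println("ssh ready:", sshReady("192.168.39.224", "/path/to/id_rsa"))
}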
	I0731 22:42:01.780772 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetConfigRaw
	I0731 22:42:01.781317 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:01.781493 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:01.781666 1194386 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 22:42:01.781681 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:42:01.782914 1194386 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 22:42:01.782931 1194386 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 22:42:01.782937 1194386 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 22:42:01.782944 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:01.785402 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.785794 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:01.785833 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.785942 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:01.786151 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:01.786335 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:01.786479 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:01.786656 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:01.786873 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:01.786885 1194386 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 22:42:01.891508 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:42:01.891533 1194386 main.go:141] libmachine: Detecting the provisioner...
	I0731 22:42:01.891542 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:01.894486 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.894898 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:01.894927 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.895131 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:01.895399 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:01.895609 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:01.895789 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:01.895975 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:01.896175 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:01.896190 1194386 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 22:42:02.000665 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 22:42:02.000750 1194386 main.go:141] libmachine: found compatible host: buildroot
	I0731 22:42:02.000757 1194386 main.go:141] libmachine: Provisioning with buildroot...
	I0731 22:42:02.000765 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetMachineName
	I0731 22:42:02.001051 1194386 buildroot.go:166] provisioning hostname "ha-150891-m02"
	I0731 22:42:02.001086 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetMachineName
	I0731 22:42:02.001291 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.003876 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.004193 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.004219 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.004367 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.004564 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.004735 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.004851 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.005012 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:02.005247 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:02.005266 1194386 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-150891-m02 && echo "ha-150891-m02" | sudo tee /etc/hostname
	I0731 22:42:02.121702 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150891-m02
	
	I0731 22:42:02.121733 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.124572 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.124994 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.125025 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.125222 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.125470 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.125671 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.125852 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.126053 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:02.126266 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:02.126284 1194386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-150891-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-150891-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-150891-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:42:02.236267 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:42:02.236298 1194386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 22:42:02.236316 1194386 buildroot.go:174] setting up certificates
	I0731 22:42:02.236328 1194386 provision.go:84] configureAuth start
	I0731 22:42:02.236337 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetMachineName
	I0731 22:42:02.236654 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:42:02.239306 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.239684 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.239717 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.239851 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.242139 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.242501 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.242526 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.242723 1194386 provision.go:143] copyHostCerts
	I0731 22:42:02.242769 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:42:02.242812 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 22:42:02.242824 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:42:02.242908 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 22:42:02.243007 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:42:02.243033 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 22:42:02.243043 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:42:02.243087 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 22:42:02.243150 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:42:02.243175 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 22:42:02.243184 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:42:02.243220 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 22:42:02.243309 1194386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.ha-150891-m02 san=[127.0.0.1 192.168.39.224 ha-150891-m02 localhost minikube]
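The server certificate is generated with the SAN list shown above (loopback, the node IP, the node hostname, localhost and minikube) so the node can be reached under any of those names. A compact Go sketch with crypto/x509 follows; it self-signs for brevity, whereas the real flow signs with the profile's CA key, and the 26280h lifetime is taken from the CertExpiration value in the config dump earlier.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the log line above; the real provision signs this
	// template with the minikube CA key, this sketch self-signs instead.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-150891-m02"}},
		DNSNames:     []string{"ha-150891-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.224")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}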
	I0731 22:42:02.346530 1194386 provision.go:177] copyRemoteCerts
	I0731 22:42:02.346589 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:42:02.346616 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.349524 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.349838 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.349867 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.350116 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.350374 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.350565 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.350712 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	I0731 22:42:02.431711 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 22:42:02.431817 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 22:42:02.455084 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 22:42:02.455172 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 22:42:02.478135 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 22:42:02.478228 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 22:42:02.501270 1194386 provision.go:87] duration metric: took 264.925805ms to configureAuth
	I0731 22:42:02.501302 1194386 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:42:02.501475 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:42:02.501561 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.504052 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.504390 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.504418 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.504570 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.504764 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.504908 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.505044 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.505280 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:02.505451 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:02.505476 1194386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 22:42:02.765035 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 22:42:02.765067 1194386 main.go:141] libmachine: Checking connection to Docker...
	I0731 22:42:02.765078 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetURL
	I0731 22:42:02.766389 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Using libvirt version 6000000
	I0731 22:42:02.768395 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.768756 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.768784 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.768967 1194386 main.go:141] libmachine: Docker is up and running!
	I0731 22:42:02.768982 1194386 main.go:141] libmachine: Reticulating splines...
	I0731 22:42:02.768989 1194386 client.go:171] duration metric: took 25.485560762s to LocalClient.Create
	I0731 22:42:02.769012 1194386 start.go:167] duration metric: took 25.485625209s to libmachine.API.Create "ha-150891"
	I0731 22:42:02.769022 1194386 start.go:293] postStartSetup for "ha-150891-m02" (driver="kvm2")
	I0731 22:42:02.769032 1194386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:42:02.769051 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:02.769330 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:42:02.769363 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.771534 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.771903 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.771935 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.772118 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.772330 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.772507 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.772679 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	I0731 22:42:02.854792 1194386 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:42:02.859040 1194386 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:42:02.859077 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 22:42:02.859163 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 22:42:02.859262 1194386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 22:42:02.859275 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /etc/ssl/certs/11794002.pem
	I0731 22:42:02.859388 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:42:02.869291 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:42:02.892937 1194386 start.go:296] duration metric: took 123.899794ms for postStartSetup
	I0731 22:42:02.892999 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetConfigRaw
	I0731 22:42:02.893710 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:42:02.896566 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.896951 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.896986 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.897226 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:42:02.897445 1194386 start.go:128] duration metric: took 25.633530271s to createHost
	I0731 22:42:02.897475 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.899680 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.900057 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.900108 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.900233 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.900428 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.900631 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.900779 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.900994 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:02.901162 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:02.901172 1194386 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 22:42:03.004868 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465722.982647454
	
	I0731 22:42:03.004901 1194386 fix.go:216] guest clock: 1722465722.982647454
	I0731 22:42:03.004910 1194386 fix.go:229] Guest: 2024-07-31 22:42:02.982647454 +0000 UTC Remote: 2024-07-31 22:42:02.897460391 +0000 UTC m=+82.432245142 (delta=85.187063ms)
	I0731 22:42:03.004929 1194386 fix.go:200] guest clock delta is within tolerance: 85.187063ms
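The fix lines above compare the time reported by the guest's date command against the host clock and accept the start only when the skew is small (85ms here). A tiny Go sketch of that comparison; the 2-second tolerance used below is an assumed value, not taken from the log.

package main

import (
	"fmt"
	"time"
)

// withinTolerance mirrors the guest-clock check above: compare the guest's
// reported time with the host's and accept small skew.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1722465722, 982647454)        // value echoed by `date` in the log
	host := guest.Add(85187063 * time.Nanosecond)    // the 85.187063ms delta seen above
	delta, ok := withinTolerance(guest, host, 2*time.Second) // tolerance is assumed
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}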
	I0731 22:42:03.004934 1194386 start.go:83] releasing machines lock for "ha-150891-m02", held for 25.741133334s
	I0731 22:42:03.004955 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:03.005260 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:42:03.008030 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.008361 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:03.008391 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.011002 1194386 out.go:177] * Found network options:
	I0731 22:42:03.012400 1194386 out.go:177]   - NO_PROXY=192.168.39.105
	W0731 22:42:03.013513 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:42:03.013571 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:03.014240 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:03.014443 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:03.014555 1194386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 22:42:03.014611 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	W0731 22:42:03.014714 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:42:03.014790 1194386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 22:42:03.014814 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:03.017516 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.017542 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.017869 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:03.017897 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.017922 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:03.017935 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.018043 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:03.018143 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:03.018279 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:03.018358 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:03.018435 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:03.018520 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:03.018589 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	I0731 22:42:03.018643 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	I0731 22:42:03.246365 1194386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 22:42:03.252053 1194386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:42:03.252152 1194386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:42:03.268896 1194386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 22:42:03.268929 1194386 start.go:495] detecting cgroup driver to use...
	I0731 22:42:03.269022 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:42:03.284943 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:42:03.299484 1194386 docker.go:217] disabling cri-docker service (if available) ...
	I0731 22:42:03.299546 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 22:42:03.313401 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 22:42:03.327404 1194386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 22:42:03.447515 1194386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 22:42:03.594212 1194386 docker.go:233] disabling docker service ...
	I0731 22:42:03.594293 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 22:42:03.608736 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 22:42:03.621935 1194386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 22:42:03.755744 1194386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 22:42:03.864008 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 22:42:03.876911 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:42:03.894800 1194386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 22:42:03.894864 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.905401 1194386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 22:42:03.905490 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.915927 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.926411 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.936885 1194386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:42:03.947334 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.957785 1194386 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.974821 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
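The sequence of sed commands above rewrites whole key = value lines in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls) rather than appending duplicates. A small Go sketch of the same whole-line rewrite idea with a regular expression; the sample config content below is made up for illustration.

package main

import (
	"fmt"
	"regexp"
)

// setConfigValue mimics the sed edits above: rewrite an entire `key = value`
// line in a crio drop-in, leaving every other line untouched.
func setConfigValue(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(key+` = "`+value+`"`))
}

func main() {
	conf := []byte("[crio.runtime]\ncgroup_manager = \"systemd\"\npause_image = \"old\"\n")
	conf = setConfigValue(conf, "cgroup_manager", "cgroupfs")
	conf = setConfigValue(conf, "pause_image", "registry.k8s.io/pause:3.9")
	fmt.Print(string(conf))
}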
	I0731 22:42:03.984854 1194386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:42:03.994141 1194386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 22:42:03.994210 1194386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 22:42:04.009379 1194386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:42:04.019163 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:42:04.135711 1194386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 22:42:04.270607 1194386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 22:42:04.270689 1194386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 22:42:04.276136 1194386 start.go:563] Will wait 60s for crictl version
	I0731 22:42:04.276200 1194386 ssh_runner.go:195] Run: which crictl
	I0731 22:42:04.279737 1194386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:42:04.320910 1194386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
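The crictl version output above is a set of Key: value lines; a few lines of Go are enough to split it into a map for checks like the 60-second "wait for crictl version" step earlier in the log. A minimal parsing sketch, with the sample input copied from the output above:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion splits the "Key:  value" lines printed by
// `crictl version` (as shown above) into a simple map.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
	v := parseCrictlVersion(out)
	fmt.Println(v["RuntimeName"], v["RuntimeVersion"])
}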
	I0731 22:42:04.321025 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:42:04.349689 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:42:04.381472 1194386 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 22:42:04.382837 1194386 out.go:177]   - env NO_PROXY=192.168.39.105
	I0731 22:42:04.384018 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:42:04.386994 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:04.387410 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:04.387440 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:04.387682 1194386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 22:42:04.391813 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:42:04.406019 1194386 mustload.go:65] Loading cluster: ha-150891
	I0731 22:42:04.406249 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:42:04.406532 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:42:04.406568 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:42:04.422418 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0731 22:42:04.422891 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:42:04.423334 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:42:04.423357 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:42:04.423682 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:42:04.423895 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:42:04.425517 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:42:04.425820 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:42:04.425849 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:42:04.442819 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0731 22:42:04.443314 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:42:04.443827 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:42:04.443857 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:42:04.444275 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:42:04.444530 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:42:04.444699 1194386 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891 for IP: 192.168.39.224
	I0731 22:42:04.444713 1194386 certs.go:194] generating shared ca certs ...
	I0731 22:42:04.444735 1194386 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:42:04.444901 1194386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 22:42:04.444953 1194386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 22:42:04.444966 1194386 certs.go:256] generating profile certs ...
	I0731 22:42:04.445066 1194386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key
	I0731 22:42:04.445100 1194386 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.f8271574
	I0731 22:42:04.445120 1194386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.f8271574 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.105 192.168.39.224 192.168.39.254]
	I0731 22:42:04.566994 1194386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.f8271574 ...
	I0731 22:42:04.567034 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.f8271574: {Name:mk440b38c075a0d1eded7b1aea3015c7a2eb447d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:42:04.567215 1194386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.f8271574 ...
	I0731 22:42:04.567230 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.f8271574: {Name:mk538452a64b13906f2016b6f80157ab13990994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:42:04.567331 1194386 certs.go:381] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.f8271574 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt
	I0731 22:42:04.567522 1194386 certs.go:385] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.f8271574 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key
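The profile cert generated above is an apiserver serving certificate signed by the shared minikube CA, with the cluster service IP, 10.0.0.1, localhost, both node IPs, and the HA VIP (192.168.39.254) as SANs. The sketch below shows the general shape of that step with Go's crypto/x509; the ca.crt/ca.key paths are hypothetical (assuming a PKCS#1 RSA CA key), and this is an illustration of the technique rather than minikube's certs.go:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEMBlock reads a PEM file and returns its first block, panicking on error.
func mustPEMBlock(path string) *pem.Block {
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	blk, _ := pem.Decode(b)
	if blk == nil {
		panic("no PEM data in " + path)
	}
	return blk
}

func main() {
	// Hypothetical CA material; substitute your own paths and key format.
	caCert, err := x509.ParseCertificate(mustPEMBlock("ca.crt").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEMBlock("ca.key").Bytes)
	if err != nil {
		panic(err)
	}

	// Leaf key plus a template carrying the IP SANs listed in the log.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.105"), net.ParseIP("192.168.39.224"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```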
	I0731 22:42:04.567731 1194386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key
	I0731 22:42:04.567755 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:42:04.567773 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:42:04.567791 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:42:04.567809 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:42:04.567825 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:42:04.567839 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:42:04.567852 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:42:04.567870 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:42:04.567934 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 22:42:04.567983 1194386 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 22:42:04.568005 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 22:42:04.568039 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 22:42:04.568076 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 22:42:04.568130 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 22:42:04.568195 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:42:04.568244 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:42:04.568267 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem -> /usr/share/ca-certificates/1179400.pem
	I0731 22:42:04.568285 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /usr/share/ca-certificates/11794002.pem
	I0731 22:42:04.568333 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:42:04.571657 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:42:04.572134 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:42:04.572166 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:42:04.572399 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:42:04.572650 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:42:04.572864 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:42:04.573040 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:42:04.648556 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 22:42:04.653487 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 22:42:04.664738 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 22:42:04.668859 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 22:42:04.679711 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 22:42:04.684221 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 22:42:04.694519 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 22:42:04.698803 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0731 22:42:04.709664 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 22:42:04.713661 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 22:42:04.723978 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 22:42:04.727811 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 22:42:04.738933 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:42:04.763882 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:42:04.788528 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:42:04.811934 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 22:42:04.834980 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 22:42:04.857764 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 22:42:04.880689 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:42:04.903429 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:42:04.926716 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:42:04.949405 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 22:42:04.972434 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 22:42:04.995700 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 22:42:05.013939 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 22:42:05.031874 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 22:42:05.048327 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0731 22:42:05.066409 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 22:42:05.084380 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 22:42:05.101397 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 22:42:05.117920 1194386 ssh_runner.go:195] Run: openssl version
	I0731 22:42:05.123328 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 22:42:05.134172 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 22:42:05.138516 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 22:42:05.138597 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 22:42:05.144305 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
	I0731 22:42:05.155324 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 22:42:05.166221 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 22:42:05.170475 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 22:42:05.170539 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 22:42:05.176184 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 22:42:05.187088 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:42:05.198203 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:42:05.202718 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:42:05.202780 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:42:05.208308 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
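Each CA bundle copied into /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so that system TLS clients trust it. A small Go sketch of that step, shelling out to the same `openssl x509 -hash -noout` invocation seen in the log; the path is illustrative and the program needs write access to /etc/ssl/certs:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and links it into
// /etc/ssl/certs as <hash>.0, the layout update-ca-certificates also produces.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Remove any stale link so repeated provisioning stays idempotent.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative target; the log links /usr/share/ca-certificates/minikubeCA.pem.
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```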
	I0731 22:42:05.226938 1194386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:42:05.231471 1194386 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:42:05.231536 1194386 kubeadm.go:934] updating node {m02 192.168.39.224 8443 v1.30.3 crio true true} ...
	I0731 22:42:05.231698 1194386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-150891-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:42:05.231740 1194386 kube-vip.go:115] generating kube-vip config ...
	I0731 22:42:05.231795 1194386 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:42:05.246943 1194386 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:42:05.247027 1194386 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
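The kube-vip static pod above is generated rather than hard-coded: the VIP (192.168.39.254), API port, interface, and leader-election settings are injected into a manifest template, and control-plane load-balancing is auto-enabled. A trimmed Go sketch of that kind of generation with text/template; the template body is a simplified stand-in, not minikube's actual kube-vip template:

```go
package main

import (
	"os"
	"text/template"
)

// Simplified stand-in manifest; only the per-cluster fields are parameterized.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: cp_enable
      value: "true"
    - name: lb_enable
      value: "true"
  hostNetwork: true
`

func main() {
	params := struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.168.39.254", Interface: "eth0", Port: 8443}

	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	// Writing to stdout here; the provisioner copies the rendered result to
	// /etc/kubernetes/manifests/kube-vip.yaml on the node.
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```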
	I0731 22:42:05.247083 1194386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:42:05.256652 1194386 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 22:42:05.256734 1194386 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 22:42:05.266557 1194386 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 22:42:05.266586 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:42:05.266584 1194386 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0731 22:42:05.266662 1194386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:42:05.266589 1194386 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0731 22:42:05.270908 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 22:42:05.270950 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 22:42:08.567796 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:42:08.567914 1194386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:42:08.572759 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 22:42:08.572795 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 22:42:09.773119 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:42:09.787520 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:42:09.787632 1194386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:42:09.792189 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 22:42:09.792247 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
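Because the node has no cached binaries ("Didn't find k8s binaries"), kubectl, kubeadm, and kubelet are fetched from dl.k8s.io with a checksum pinned to the published .sha256 file before being scp'd into /var/lib/minikube/binaries. A sketch of that download-and-verify step in Go, using the same release URL pattern that appears in the log; error handling and streaming are kept minimal for brevity:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL fully into memory (fine for a sketch; kubelet is ~100MB).
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	// The .sha256 file carries the hex digest (possibly followed by a filename).
	fields := strings.Fields(string(sum))
	if len(fields) == 0 {
		panic("empty checksum file")
	}
	want := fields[0]
	digest := sha256.Sum256(bin)
	got := hex.EncodeToString(digest[:])
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified kubelet, sha256", got)
}
```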
	I0731 22:42:10.200907 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 22:42:10.210379 1194386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 22:42:10.227553 1194386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:42:10.244445 1194386 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 22:42:10.260996 1194386 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:42:10.265160 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:42:10.277149 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:42:10.390340 1194386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:42:10.406449 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:42:10.406959 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:42:10.407022 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:42:10.422762 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0731 22:42:10.423264 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:42:10.423777 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:42:10.423801 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:42:10.424216 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:42:10.424463 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:42:10.424666 1194386 start.go:317] joinCluster: &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:42:10.424772 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 22:42:10.424789 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:42:10.427571 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:42:10.428041 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:42:10.428078 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:42:10.428248 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:42:10.428481 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:42:10.428657 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:42:10.428809 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:42:10.578319 1194386 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:42:10.578380 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x5cnck.ovzvspqpct86akxh --discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-150891-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443"
	I0731 22:42:32.726020 1194386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x5cnck.ovzvspqpct86akxh --discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-150891-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443": (22.147610246s)
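The `--discovery-token-ca-cert-hash sha256:…` argument passed to `kubeadm join` pins the cluster CA: following the documented kubeadm convention, it is the SHA-256 of the DER-encoded Subject Public Key Info of the CA certificate, which the joining node verifies before trusting the discovery data. A short Go sketch of computing that value from a ca.crt; the path is illustrative (on a control-plane node it would typically be /etc/kubernetes/pki/ca.crt):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Illustrative path to the cluster CA certificate.
	raw, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("ca.crt does not contain PEM data")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo of the CA public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("--discovery-token-ca-cert-hash sha256:%s\n", hex.EncodeToString(sum[:]))
}
```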
	I0731 22:42:32.726067 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 22:42:33.258410 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-150891-m02 minikube.k8s.io/updated_at=2024_07_31T22_42_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-150891 minikube.k8s.io/primary=false
	I0731 22:42:33.413377 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-150891-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 22:42:33.521866 1194386 start.go:319] duration metric: took 23.097192701s to joinCluster
	I0731 22:42:33.521958 1194386 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:42:33.522369 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:42:33.523496 1194386 out.go:177] * Verifying Kubernetes components...
	I0731 22:42:33.524799 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:42:33.754309 1194386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:42:33.774742 1194386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:42:33.775039 1194386 kapi.go:59] client config for ha-150891: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d035c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 22:42:33.775125 1194386 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.105:8443
	I0731 22:42:33.775384 1194386 node_ready.go:35] waiting up to 6m0s for node "ha-150891-m02" to be "Ready" ...
	I0731 22:42:33.775519 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:33.775532 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:33.775544 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:33.775552 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:33.796760 1194386 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0731 22:42:34.275675 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:34.275712 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:34.275724 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:34.275729 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:34.280296 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:42:34.776268 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:34.776296 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:34.776305 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:34.776308 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:34.779673 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:35.276623 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:35.276655 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:35.276663 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:35.276666 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:35.280144 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:35.776532 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:35.776560 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:35.776573 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:35.776578 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:35.779865 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:35.780741 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:36.276038 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:36.276065 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:36.276074 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:36.276080 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:36.279619 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:36.776522 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:36.776555 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:36.776566 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:36.776572 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:36.780073 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:37.275933 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:37.275963 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:37.275971 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:37.275976 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:37.279387 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:37.775927 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:37.775951 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:37.775962 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:37.775968 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:37.779256 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:38.276341 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:38.276366 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:38.276375 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:38.276380 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:38.279916 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:38.280578 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:38.776501 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:38.776526 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:38.776535 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:38.776539 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:38.779625 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:39.276621 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:39.276647 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:39.276658 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:39.276663 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:39.280311 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:39.776612 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:39.776636 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:39.776644 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:39.776648 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:39.779992 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:40.276042 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:40.276071 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:40.276079 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:40.276083 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:40.279206 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:40.775762 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:40.775791 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:40.775799 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:40.775804 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:40.778868 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:40.779411 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:41.275678 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:41.275710 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:41.275723 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:41.275729 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:41.279076 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:41.775913 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:41.775942 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:41.775954 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:41.775961 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:41.779277 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:42.276066 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:42.276113 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:42.276124 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:42.276130 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:42.279694 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:42.776191 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:42.776225 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:42.776236 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:42.776240 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:42.779748 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:42.780220 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:43.276407 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:43.276436 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:43.276449 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:43.276454 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:43.280052 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:43.776052 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:43.776078 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:43.776096 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:43.776101 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:43.779136 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:44.276115 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:44.276144 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:44.276153 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:44.276158 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:44.279340 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:44.776148 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:44.776174 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:44.776183 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:44.776189 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:44.779238 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:45.276307 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:45.276335 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:45.276343 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:45.276347 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:45.279538 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:45.280200 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:45.775892 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:45.775920 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:45.775928 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:45.775931 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:45.778889 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:46.275874 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:46.275901 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:46.275909 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:46.275912 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:46.279335 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:46.775583 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:46.775610 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:46.775619 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:46.775623 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:46.778852 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:47.276650 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:47.276675 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:47.276690 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:47.276694 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:47.281842 1194386 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:42:47.282342 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:47.776361 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:47.776390 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:47.776401 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:47.776405 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:47.779395 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:48.276454 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:48.276492 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:48.276506 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:48.276515 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:48.279677 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:48.776600 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:48.776631 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:48.776640 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:48.776644 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:48.780123 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:49.275930 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:49.275955 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:49.275964 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:49.275968 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:49.279328 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:49.775711 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:49.775743 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:49.775753 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:49.775758 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:49.778833 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:49.779323 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:50.276467 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:50.276496 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.276505 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.276510 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.281180 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:42:50.281626 1194386 node_ready.go:49] node "ha-150891-m02" has status "Ready":"True"
	I0731 22:42:50.281647 1194386 node_ready.go:38] duration metric: took 16.506246165s for node "ha-150891-m02" to be "Ready" ...
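The readiness loop above issues the same GET against /api/v1/nodes/ha-150891-m02 roughly every 500ms until status.conditions reports Ready=True (about 16.5s here). A stripped-down Go version of that poll against the raw REST endpoint; the client certificate and CA paths are illustrative placeholders (minikube keeps equivalents under ~/.minikube/profiles/<name>/ and ~/.minikube/ca.crt), and a production client would use client-go instead:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

// node captures only the fields the readiness check needs.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// ready returns true when the node's Ready condition is True.
func ready(c *http.Client, url string) bool {
	resp, err := c.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return false
	}
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false
	}
	for _, cond := range n.Status.Conditions {
		if cond.Type == "Ready" {
			return cond.Status == "True"
		}
	}
	return false
}

func main() {
	// Illustrative credential paths; substitute the profile's client cert/key and CA.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}

	url := "https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ready(client, url) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for node to become Ready")
	os.Exit(1)
}
```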
	I0731 22:42:50.281657 1194386 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:42:50.281758 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:42:50.281768 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.281776 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.281779 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.288346 1194386 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:42:50.295951 1194386 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.296054 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4928n
	I0731 22:42:50.296063 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.296071 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.296075 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.299556 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.300377 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:50.300397 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.300406 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.300413 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.304009 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.304562 1194386 pod_ready.go:92] pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:50.304585 1194386 pod_ready.go:81] duration metric: took 8.598705ms for pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.304599 1194386 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.304676 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rkd4j
	I0731 22:42:50.304687 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.304698 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.304704 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.308419 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.309357 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:50.309373 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.309380 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.309387 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.313309 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.313850 1194386 pod_ready.go:92] pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:50.313869 1194386 pod_ready.go:81] duration metric: took 9.262271ms for pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.313879 1194386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.313942 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891
	I0731 22:42:50.313949 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.313956 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.313965 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.317276 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.317844 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:50.317859 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.317867 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.317871 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.321050 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.321564 1194386 pod_ready.go:92] pod "etcd-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:50.321590 1194386 pod_ready.go:81] duration metric: took 7.70537ms for pod "etcd-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.321601 1194386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.321664 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891-m02
	I0731 22:42:50.321671 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.321679 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.321687 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.324603 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:50.325225 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:50.325239 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.325246 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.325255 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.328213 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:50.821988 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891-m02
	I0731 22:42:50.822013 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.822020 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.822024 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.825560 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.826161 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:50.826177 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.826186 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.826190 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.829105 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:50.829539 1194386 pod_ready.go:92] pod "etcd-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:50.829558 1194386 pod_ready.go:81] duration metric: took 507.948191ms for pod "etcd-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.829580 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.876995 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891
	I0731 22:42:50.877023 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.877035 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.877042 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.880746 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.076681 1194386 request.go:629] Waited for 195.317866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:51.076815 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:51.076843 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:51.076854 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:51.076859 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:51.080647 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.081184 1194386 pod_ready.go:92] pod "kube-apiserver-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:51.081207 1194386 pod_ready.go:81] duration metric: took 251.615168ms for pod "kube-apiserver-ha-150891" in "kube-system" namespace to be "Ready" ...
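
Note on the "Waited for ... due to client-side throttling, not priority and fairness" lines above: that delay comes from client-go's client-side rate limiter (default 5 QPS, burst 10), not from API Priority and Fairness on the server. A minimal sketch of raising those limits on a rest.Config; the QPS/Burst numbers here are illustrative, not minikube's actual settings:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset whose client-side rate limiter is less
    // likely to delay bursts of GET pod / GET node calls like the ones logged
    // above. The QPS/Burst values are illustrative only.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go default is 5
        cfg.Burst = 100 // client-go default is 10
        return kubernetes.NewForConfig(cfg)
    }
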
	I0731 22:42:51.081218 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:51.276660 1194386 request.go:629] Waited for 195.356743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m02
	I0731 22:42:51.276726 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m02
	I0731 22:42:51.276733 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:51.276742 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:51.276750 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:51.280464 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.477266 1194386 request.go:629] Waited for 196.12777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:51.477356 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:51.477361 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:51.477369 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:51.477376 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:51.480688 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.481224 1194386 pod_ready.go:92] pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:51.481246 1194386 pod_ready.go:81] duration metric: took 400.020752ms for pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:51.481262 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:51.677268 1194386 request.go:629] Waited for 195.916954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891
	I0731 22:42:51.677346 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891
	I0731 22:42:51.677354 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:51.677367 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:51.677378 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:51.680623 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.876558 1194386 request.go:629] Waited for 195.306596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:51.876630 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:51.876636 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:51.876644 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:51.876648 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:51.879814 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.880342 1194386 pod_ready.go:92] pod "kube-controller-manager-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:51.880364 1194386 pod_ready.go:81] duration metric: took 399.094253ms for pod "kube-controller-manager-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:51.880374 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:52.077442 1194386 request.go:629] Waited for 196.991894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m02
	I0731 22:42:52.077546 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m02
	I0731 22:42:52.077557 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:52.077566 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:52.077571 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:52.081112 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:52.277324 1194386 request.go:629] Waited for 195.412254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:52.277400 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:52.277405 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:52.277413 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:52.277421 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:52.280633 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:52.281150 1194386 pod_ready.go:92] pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:52.281172 1194386 pod_ready.go:81] duration metric: took 400.792125ms for pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:52.281186 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9xcss" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:52.476601 1194386 request.go:629] Waited for 195.339584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xcss
	I0731 22:42:52.476671 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xcss
	I0731 22:42:52.476676 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:52.476684 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:52.476688 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:52.480373 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:52.676485 1194386 request.go:629] Waited for 195.265215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:52.676577 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:52.676583 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:52.676592 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:52.676598 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:52.679895 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:52.680470 1194386 pod_ready.go:92] pod "kube-proxy-9xcss" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:52.680498 1194386 pod_ready.go:81] duration metric: took 399.303657ms for pod "kube-proxy-9xcss" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:52.680509 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmkp9" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:52.877545 1194386 request.go:629] Waited for 196.954806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nmkp9
	I0731 22:42:52.877638 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nmkp9
	I0731 22:42:52.877644 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:52.877652 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:52.877658 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:52.880856 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:53.076949 1194386 request.go:629] Waited for 195.422276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:53.077046 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:53.077051 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.077060 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.077069 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.080155 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:53.080589 1194386 pod_ready.go:92] pod "kube-proxy-nmkp9" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:53.080608 1194386 pod_ready.go:81] duration metric: took 400.092371ms for pod "kube-proxy-nmkp9" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:53.080618 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:53.276840 1194386 request.go:629] Waited for 196.118028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891
	I0731 22:42:53.276913 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891
	I0731 22:42:53.276918 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.276927 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.276932 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.280453 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:53.477535 1194386 request.go:629] Waited for 196.281182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:53.477639 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:53.477652 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.477663 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.477672 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.480684 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:53.481253 1194386 pod_ready.go:92] pod "kube-scheduler-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:53.481282 1194386 pod_ready.go:81] duration metric: took 400.655466ms for pod "kube-scheduler-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:53.481297 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:53.677319 1194386 request.go:629] Waited for 195.9186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m02
	I0731 22:42:53.677387 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m02
	I0731 22:42:53.677393 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.677401 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.677408 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.680839 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:53.876870 1194386 request.go:629] Waited for 195.375145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:53.876947 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:53.876952 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.876961 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.876965 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.880151 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:53.880910 1194386 pod_ready.go:92] pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:53.880938 1194386 pod_ready.go:81] duration metric: took 399.629245ms for pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:53.880953 1194386 pod_ready.go:38] duration metric: took 3.599257708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
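
The wait loop above (pod_ready.go) GETs each system pod and the node it is scheduled on, and treats the pod as done once its PodReady condition reports True. A rough client-go equivalent of that per-pod check; the helper name and polling interval are mine, not minikube's:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the named pod reports condition PodReady=True,
    // mirroring the pod_ready.go loop in the log (illustrative sketch only).
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient errors: keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
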
	I0731 22:42:53.880977 1194386 api_server.go:52] waiting for apiserver process to appear ...
	I0731 22:42:53.881057 1194386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:42:53.895803 1194386 api_server.go:72] duration metric: took 20.373791047s to wait for apiserver process to appear ...
	I0731 22:42:53.895843 1194386 api_server.go:88] waiting for apiserver healthz status ...
	I0731 22:42:53.895873 1194386 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0731 22:42:53.903218 1194386 api_server.go:279] https://192.168.39.105:8443/healthz returned 200:
	ok
	I0731 22:42:53.903305 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/version
	I0731 22:42:53.903314 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.903322 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.903330 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.904681 1194386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 22:42:53.904825 1194386 api_server.go:141] control plane version: v1.30.3
	I0731 22:42:53.904851 1194386 api_server.go:131] duration metric: took 8.998033ms to wait for apiserver health ...
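
The health gate above is two plain HTTPS GETs: /healthz must return 200 with body "ok", and /version yields the control-plane version (v1.30.3 here). A compressed sketch with net/http; skipping certificate verification is for illustration only, minikube trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz mirrors the healthz probe logged above. InsecureSkipVerify
    // is used only to keep the sketch short.
    func checkHealthz(base string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }
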
	I0731 22:42:53.904863 1194386 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 22:42:54.077394 1194386 request.go:629] Waited for 172.399936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:42:54.077460 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:42:54.077465 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:54.077480 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:54.077485 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:54.082964 1194386 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:42:54.089092 1194386 system_pods.go:59] 17 kube-system pods found
	I0731 22:42:54.089130 1194386 system_pods.go:61] "coredns-7db6d8ff4d-4928n" [258080d9-48d4-4214-a8c2-ccdd229a3a4f] Running
	I0731 22:42:54.089136 1194386 system_pods.go:61] "coredns-7db6d8ff4d-rkd4j" [b40942b0-bff9-4a49-88a3-d188d5b7dcbe] Running
	I0731 22:42:54.089140 1194386 system_pods.go:61] "etcd-ha-150891" [3f5f2e82-256b-406e-b58b-51255d338219] Running
	I0731 22:42:54.089143 1194386 system_pods.go:61] "etcd-ha-150891-m02" [d20ff7ae-a18e-476a-9f38-bf9d2eea9e32] Running
	I0731 22:42:54.089146 1194386 system_pods.go:61] "kindnet-4qn8c" [4143fb96-5f2a-4107-807d-29ffbf5a95b8] Running
	I0731 22:42:54.089149 1194386 system_pods.go:61] "kindnet-bz2j7" [160def8b-f6ae-4664-8489-422121dd5a94] Running
	I0731 22:42:54.089152 1194386 system_pods.go:61] "kube-apiserver-ha-150891" [4b8aded2-d6a3-4493-ae6e-a345a4c1c872] Running
	I0731 22:42:54.089154 1194386 system_pods.go:61] "kube-apiserver-ha-150891-m02" [667b2e17-ae07-44a9-91ba-486fbacc93ae] Running
	I0731 22:42:54.089157 1194386 system_pods.go:61] "kube-controller-manager-ha-150891" [d3e86e76-fbc2-4732-acfc-8462570c27e4] Running
	I0731 22:42:54.089160 1194386 system_pods.go:61] "kube-controller-manager-ha-150891-m02" [952d0923-4ad6-4411-ae52-5bdfc69af65c] Running
	I0731 22:42:54.089163 1194386 system_pods.go:61] "kube-proxy-9xcss" [287c0a26-1f93-4579-a5db-29b604571422] Running
	I0731 22:42:54.089166 1194386 system_pods.go:61] "kube-proxy-nmkp9" [9253676c-a473-471b-b82e-c5e7fce39774] Running
	I0731 22:42:54.089169 1194386 system_pods.go:61] "kube-scheduler-ha-150891" [bc944154-4cb3-402d-9623-987c3acecd4c] Running
	I0731 22:42:54.089171 1194386 system_pods.go:61] "kube-scheduler-ha-150891-m02" [5e2a6e0a-df70-4e80-8f94-4a6ad47dffd9] Running
	I0731 22:42:54.089174 1194386 system_pods.go:61] "kube-vip-ha-150891" [1b703a99-faf3-4c2d-a871-0fb6bce0b917] Running
	I0731 22:42:54.089177 1194386 system_pods.go:61] "kube-vip-ha-150891-m02" [dc66b927-6e80-477f-9825-8385a3df1a03] Running
	I0731 22:42:54.089180 1194386 system_pods.go:61] "storage-provisioner" [c482636f-76e6-4ea7-9a14-3e9d6a7a4308] Running
	I0731 22:42:54.089187 1194386 system_pods.go:74] duration metric: took 184.313443ms to wait for pod list to return data ...
	I0731 22:42:54.089198 1194386 default_sa.go:34] waiting for default service account to be created ...
	I0731 22:42:54.276626 1194386 request.go:629] Waited for 187.306183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:42:54.276715 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:42:54.276727 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:54.276736 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:54.276744 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:54.279860 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:54.280186 1194386 default_sa.go:45] found service account: "default"
	I0731 22:42:54.280209 1194386 default_sa.go:55] duration metric: took 191.004768ms for default service account to be created ...
	I0731 22:42:54.280218 1194386 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 22:42:54.477457 1194386 request.go:629] Waited for 197.165061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:42:54.477540 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:42:54.477547 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:54.477558 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:54.477567 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:54.482433 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:42:54.486954 1194386 system_pods.go:86] 17 kube-system pods found
	I0731 22:42:54.486987 1194386 system_pods.go:89] "coredns-7db6d8ff4d-4928n" [258080d9-48d4-4214-a8c2-ccdd229a3a4f] Running
	I0731 22:42:54.486992 1194386 system_pods.go:89] "coredns-7db6d8ff4d-rkd4j" [b40942b0-bff9-4a49-88a3-d188d5b7dcbe] Running
	I0731 22:42:54.486997 1194386 system_pods.go:89] "etcd-ha-150891" [3f5f2e82-256b-406e-b58b-51255d338219] Running
	I0731 22:42:54.487001 1194386 system_pods.go:89] "etcd-ha-150891-m02" [d20ff7ae-a18e-476a-9f38-bf9d2eea9e32] Running
	I0731 22:42:54.487005 1194386 system_pods.go:89] "kindnet-4qn8c" [4143fb96-5f2a-4107-807d-29ffbf5a95b8] Running
	I0731 22:42:54.487009 1194386 system_pods.go:89] "kindnet-bz2j7" [160def8b-f6ae-4664-8489-422121dd5a94] Running
	I0731 22:42:54.487013 1194386 system_pods.go:89] "kube-apiserver-ha-150891" [4b8aded2-d6a3-4493-ae6e-a345a4c1c872] Running
	I0731 22:42:54.487017 1194386 system_pods.go:89] "kube-apiserver-ha-150891-m02" [667b2e17-ae07-44a9-91ba-486fbacc93ae] Running
	I0731 22:42:54.487021 1194386 system_pods.go:89] "kube-controller-manager-ha-150891" [d3e86e76-fbc2-4732-acfc-8462570c27e4] Running
	I0731 22:42:54.487025 1194386 system_pods.go:89] "kube-controller-manager-ha-150891-m02" [952d0923-4ad6-4411-ae52-5bdfc69af65c] Running
	I0731 22:42:54.487030 1194386 system_pods.go:89] "kube-proxy-9xcss" [287c0a26-1f93-4579-a5db-29b604571422] Running
	I0731 22:42:54.487033 1194386 system_pods.go:89] "kube-proxy-nmkp9" [9253676c-a473-471b-b82e-c5e7fce39774] Running
	I0731 22:42:54.487039 1194386 system_pods.go:89] "kube-scheduler-ha-150891" [bc944154-4cb3-402d-9623-987c3acecd4c] Running
	I0731 22:42:54.487045 1194386 system_pods.go:89] "kube-scheduler-ha-150891-m02" [5e2a6e0a-df70-4e80-8f94-4a6ad47dffd9] Running
	I0731 22:42:54.487049 1194386 system_pods.go:89] "kube-vip-ha-150891" [1b703a99-faf3-4c2d-a871-0fb6bce0b917] Running
	I0731 22:42:54.487052 1194386 system_pods.go:89] "kube-vip-ha-150891-m02" [dc66b927-6e80-477f-9825-8385a3df1a03] Running
	I0731 22:42:54.487056 1194386 system_pods.go:89] "storage-provisioner" [c482636f-76e6-4ea7-9a14-3e9d6a7a4308] Running
	I0731 22:42:54.487063 1194386 system_pods.go:126] duration metric: took 206.839613ms to wait for k8s-apps to be running ...
	I0731 22:42:54.487073 1194386 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 22:42:54.487118 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:42:54.504629 1194386 system_svc.go:56] duration metric: took 17.54447ms WaitForService to wait for kubelet
	I0731 22:42:54.504662 1194386 kubeadm.go:582] duration metric: took 20.982660012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
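
The kubelet check above is a single shell command run on the guest by minikube's ssh_runner: `sudo systemctl is-active --quiet service kubelet`, where a zero exit status means the unit is active. Run locally, the same probe looks like this (sketch only; minikube executes it over SSH, not on the host):

    package main

    import "os/exec"

    // kubeletActive runs the same systemctl probe as the log line above, but
    // on the local machine; exit status 0 means the kubelet unit is active.
    func kubeletActive() bool {
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        return cmd.Run() == nil
    }
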
	I0731 22:42:54.504685 1194386 node_conditions.go:102] verifying NodePressure condition ...
	I0731 22:42:54.677167 1194386 request.go:629] Waited for 172.369878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes
	I0731 22:42:54.677247 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes
	I0731 22:42:54.677256 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:54.677269 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:54.677278 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:54.680340 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:54.681073 1194386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:42:54.681098 1194386 node_conditions.go:123] node cpu capacity is 2
	I0731 22:42:54.681110 1194386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:42:54.681114 1194386 node_conditions.go:123] node cpu capacity is 2
	I0731 22:42:54.681118 1194386 node_conditions.go:105] duration metric: took 176.428527ms to run NodePressure ...
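
The NodePressure step lists every node and reads its capacity figures, which is where the "ephemeral capacity is 17734596Ki" and "cpu capacity is 2" lines come from. A sketch of pulling those values from the Node objects with client-go (the helper name is mine, not minikube's):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists nodes and prints the two capacity figures the
    // NodePressure check logs above (illustrative helper, not minikube code).
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }
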
	I0731 22:42:54.681130 1194386 start.go:241] waiting for startup goroutines ...
	I0731 22:42:54.681156 1194386 start.go:255] writing updated cluster config ...
	I0731 22:42:54.683187 1194386 out.go:177] 
	I0731 22:42:54.684527 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:42:54.684624 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:42:54.686141 1194386 out.go:177] * Starting "ha-150891-m03" control-plane node in "ha-150891" cluster
	I0731 22:42:54.687148 1194386 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:42:54.687180 1194386 cache.go:56] Caching tarball of preloaded images
	I0731 22:42:54.687312 1194386 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 22:42:54.687324 1194386 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 22:42:54.687418 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:42:54.687611 1194386 start.go:360] acquireMachinesLock for ha-150891-m03: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:42:54.687678 1194386 start.go:364] duration metric: took 29.245µs to acquireMachinesLock for "ha-150891-m03"
	I0731 22:42:54.687698 1194386 start.go:93] Provisioning new machine with config: &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:42:54.687796 1194386 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0731 22:42:54.689140 1194386 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:42:54.689312 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:42:54.689350 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:42:54.705381 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45415
	I0731 22:42:54.705867 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:42:54.706370 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:42:54.706394 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:42:54.706726 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:42:54.706922 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetMachineName
	I0731 22:42:54.707047 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:42:54.707211 1194386 start.go:159] libmachine.API.Create for "ha-150891" (driver="kvm2")
	I0731 22:42:54.707245 1194386 client.go:168] LocalClient.Create starting
	I0731 22:42:54.707288 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem
	I0731 22:42:54.707333 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:42:54.707357 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:42:54.707429 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem
	I0731 22:42:54.707457 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:42:54.707475 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:42:54.707496 1194386 main.go:141] libmachine: Running pre-create checks...
	I0731 22:42:54.707509 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .PreCreateCheck
	I0731 22:42:54.707700 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetConfigRaw
	I0731 22:42:54.708192 1194386 main.go:141] libmachine: Creating machine...
	I0731 22:42:54.708210 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .Create
	I0731 22:42:54.708351 1194386 main.go:141] libmachine: (ha-150891-m03) Creating KVM machine...
	I0731 22:42:54.709626 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found existing default KVM network
	I0731 22:42:54.709793 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found existing private KVM network mk-ha-150891
	I0731 22:42:54.709952 1194386 main.go:141] libmachine: (ha-150891-m03) Setting up store path in /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03 ...
	I0731 22:42:54.709979 1194386 main.go:141] libmachine: (ha-150891-m03) Building disk image from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 22:42:54.710089 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:54.709964 1195189 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:42:54.710164 1194386 main.go:141] libmachine: (ha-150891-m03) Downloading /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:42:54.996918 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:54.996772 1195189 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa...
	I0731 22:42:55.135913 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:55.135778 1195189 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/ha-150891-m03.rawdisk...
	I0731 22:42:55.135944 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Writing magic tar header
	I0731 22:42:55.135954 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Writing SSH key tar header
	I0731 22:42:55.135962 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:55.135923 1195189 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03 ...
	I0731 22:42:55.136120 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03
	I0731 22:42:55.136157 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03 (perms=drwx------)
	I0731 22:42:55.136173 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines
	I0731 22:42:55.136196 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:42:55.136208 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186
	I0731 22:42:55.136224 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 22:42:55.136243 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines (perms=drwxr-xr-x)
	I0731 22:42:55.136254 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins
	I0731 22:42:55.136268 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home
	I0731 22:42:55.136279 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Skipping /home - not owner
	I0731 22:42:55.136295 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube (perms=drwxr-xr-x)
	I0731 22:42:55.136307 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186 (perms=drwxrwxr-x)
	I0731 22:42:55.136316 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 22:42:55.136321 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
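
Machine creation above begins by writing an RSA key pair under .minikube/machines/ha-150891-m03/ and tightening directory permissions before the raw disk image is assembled. A condensed sketch of that key-generation step (paths and the helper name are mine, not minikube's):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"
        "path/filepath"

        "golang.org/x/crypto/ssh"
    )

    // writeSSHKeyPair creates an RSA key pair like the "Creating ssh key" step
    // in the log: a PEM-encoded private key (mode 0600) plus an
    // authorized_keys-style public key next to it. Illustrative sketch only.
    func writeSSHKeyPair(dir string) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile(filepath.Join(dir, "id_rsa"), privPEM, 0600); err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            return err
        }
        return os.WriteFile(filepath.Join(dir, "id_rsa.pub"), ssh.MarshalAuthorizedKey(pub), 0644)
    }
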
	I0731 22:42:55.136329 1194386 main.go:141] libmachine: (ha-150891-m03) Creating domain...
	I0731 22:42:55.137274 1194386 main.go:141] libmachine: (ha-150891-m03) define libvirt domain using xml: 
	I0731 22:42:55.137306 1194386 main.go:141] libmachine: (ha-150891-m03) <domain type='kvm'>
	I0731 22:42:55.137350 1194386 main.go:141] libmachine: (ha-150891-m03)   <name>ha-150891-m03</name>
	I0731 22:42:55.137378 1194386 main.go:141] libmachine: (ha-150891-m03)   <memory unit='MiB'>2200</memory>
	I0731 22:42:55.137413 1194386 main.go:141] libmachine: (ha-150891-m03)   <vcpu>2</vcpu>
	I0731 22:42:55.137437 1194386 main.go:141] libmachine: (ha-150891-m03)   <features>
	I0731 22:42:55.137453 1194386 main.go:141] libmachine: (ha-150891-m03)     <acpi/>
	I0731 22:42:55.137460 1194386 main.go:141] libmachine: (ha-150891-m03)     <apic/>
	I0731 22:42:55.137468 1194386 main.go:141] libmachine: (ha-150891-m03)     <pae/>
	I0731 22:42:55.137475 1194386 main.go:141] libmachine: (ha-150891-m03)     
	I0731 22:42:55.137482 1194386 main.go:141] libmachine: (ha-150891-m03)   </features>
	I0731 22:42:55.137491 1194386 main.go:141] libmachine: (ha-150891-m03)   <cpu mode='host-passthrough'>
	I0731 22:42:55.137499 1194386 main.go:141] libmachine: (ha-150891-m03)   
	I0731 22:42:55.137514 1194386 main.go:141] libmachine: (ha-150891-m03)   </cpu>
	I0731 22:42:55.137535 1194386 main.go:141] libmachine: (ha-150891-m03)   <os>
	I0731 22:42:55.137544 1194386 main.go:141] libmachine: (ha-150891-m03)     <type>hvm</type>
	I0731 22:42:55.137554 1194386 main.go:141] libmachine: (ha-150891-m03)     <boot dev='cdrom'/>
	I0731 22:42:55.137562 1194386 main.go:141] libmachine: (ha-150891-m03)     <boot dev='hd'/>
	I0731 22:42:55.137572 1194386 main.go:141] libmachine: (ha-150891-m03)     <bootmenu enable='no'/>
	I0731 22:42:55.137587 1194386 main.go:141] libmachine: (ha-150891-m03)   </os>
	I0731 22:42:55.137599 1194386 main.go:141] libmachine: (ha-150891-m03)   <devices>
	I0731 22:42:55.137610 1194386 main.go:141] libmachine: (ha-150891-m03)     <disk type='file' device='cdrom'>
	I0731 22:42:55.137626 1194386 main.go:141] libmachine: (ha-150891-m03)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/boot2docker.iso'/>
	I0731 22:42:55.137637 1194386 main.go:141] libmachine: (ha-150891-m03)       <target dev='hdc' bus='scsi'/>
	I0731 22:42:55.137656 1194386 main.go:141] libmachine: (ha-150891-m03)       <readonly/>
	I0731 22:42:55.137671 1194386 main.go:141] libmachine: (ha-150891-m03)     </disk>
	I0731 22:42:55.137685 1194386 main.go:141] libmachine: (ha-150891-m03)     <disk type='file' device='disk'>
	I0731 22:42:55.137697 1194386 main.go:141] libmachine: (ha-150891-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 22:42:55.137721 1194386 main.go:141] libmachine: (ha-150891-m03)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/ha-150891-m03.rawdisk'/>
	I0731 22:42:55.137731 1194386 main.go:141] libmachine: (ha-150891-m03)       <target dev='hda' bus='virtio'/>
	I0731 22:42:55.137741 1194386 main.go:141] libmachine: (ha-150891-m03)     </disk>
	I0731 22:42:55.137756 1194386 main.go:141] libmachine: (ha-150891-m03)     <interface type='network'>
	I0731 22:42:55.137769 1194386 main.go:141] libmachine: (ha-150891-m03)       <source network='mk-ha-150891'/>
	I0731 22:42:55.137780 1194386 main.go:141] libmachine: (ha-150891-m03)       <model type='virtio'/>
	I0731 22:42:55.137789 1194386 main.go:141] libmachine: (ha-150891-m03)     </interface>
	I0731 22:42:55.137805 1194386 main.go:141] libmachine: (ha-150891-m03)     <interface type='network'>
	I0731 22:42:55.137816 1194386 main.go:141] libmachine: (ha-150891-m03)       <source network='default'/>
	I0731 22:42:55.137824 1194386 main.go:141] libmachine: (ha-150891-m03)       <model type='virtio'/>
	I0731 22:42:55.137835 1194386 main.go:141] libmachine: (ha-150891-m03)     </interface>
	I0731 22:42:55.137843 1194386 main.go:141] libmachine: (ha-150891-m03)     <serial type='pty'>
	I0731 22:42:55.137854 1194386 main.go:141] libmachine: (ha-150891-m03)       <target port='0'/>
	I0731 22:42:55.137863 1194386 main.go:141] libmachine: (ha-150891-m03)     </serial>
	I0731 22:42:55.137871 1194386 main.go:141] libmachine: (ha-150891-m03)     <console type='pty'>
	I0731 22:42:55.137881 1194386 main.go:141] libmachine: (ha-150891-m03)       <target type='serial' port='0'/>
	I0731 22:42:55.137905 1194386 main.go:141] libmachine: (ha-150891-m03)     </console>
	I0731 22:42:55.137930 1194386 main.go:141] libmachine: (ha-150891-m03)     <rng model='virtio'>
	I0731 22:42:55.137946 1194386 main.go:141] libmachine: (ha-150891-m03)       <backend model='random'>/dev/random</backend>
	I0731 22:42:55.137962 1194386 main.go:141] libmachine: (ha-150891-m03)     </rng>
	I0731 22:42:55.137974 1194386 main.go:141] libmachine: (ha-150891-m03)     
	I0731 22:42:55.137984 1194386 main.go:141] libmachine: (ha-150891-m03)     
	I0731 22:42:55.137995 1194386 main.go:141] libmachine: (ha-150891-m03)   </devices>
	I0731 22:42:55.138005 1194386 main.go:141] libmachine: (ha-150891-m03) </domain>
	I0731 22:42:55.138016 1194386 main.go:141] libmachine: (ha-150891-m03) 
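
The XML above is the libvirt domain definition the kvm2 driver generates for the new node (2 vCPUs, 2200 MiB, the boot2docker ISO as cdrom, a raw virtio disk, and two virtio NICs). A minimal sketch of defining and starting such a domain with the libvirt Go bindings, assuming the libvirt.org/go/libvirt package; the kvm2 driver does considerably more (network checks, DHCP lease handling, static IP reservation):

    package main

    import (
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart feeds a domain XML like the one above to libvirtd over
    // the qemu:///system URI and then boots it. Sketch only.
    func defineAndStart(xmlPath string) error {
        xml, err := os.ReadFile(xmlPath)
        if err != nil {
            return err
        }
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            return err
        }
        defer dom.Free()
        return dom.Create() // equivalent to `virsh start ha-150891-m03`
    }
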
	I0731 22:42:55.145140 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:94:0b:a4 in network default
	I0731 22:42:55.145655 1194386 main.go:141] libmachine: (ha-150891-m03) Ensuring networks are active...
	I0731 22:42:55.145678 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:55.146466 1194386 main.go:141] libmachine: (ha-150891-m03) Ensuring network default is active
	I0731 22:42:55.146839 1194386 main.go:141] libmachine: (ha-150891-m03) Ensuring network mk-ha-150891 is active
	I0731 22:42:55.147165 1194386 main.go:141] libmachine: (ha-150891-m03) Getting domain xml...
	I0731 22:42:55.147949 1194386 main.go:141] libmachine: (ha-150891-m03) Creating domain...
	I0731 22:42:56.412263 1194386 main.go:141] libmachine: (ha-150891-m03) Waiting to get IP...
	I0731 22:42:56.413215 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:56.413614 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:56.413666 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:56.413618 1195189 retry.go:31] will retry after 311.711502ms: waiting for machine to come up
	I0731 22:42:56.727500 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:56.728058 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:56.728083 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:56.728017 1195189 retry.go:31] will retry after 377.689252ms: waiting for machine to come up
	I0731 22:42:57.107777 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:57.108222 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:57.108253 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:57.108160 1195189 retry.go:31] will retry after 361.803769ms: waiting for machine to come up
	I0731 22:42:57.471861 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:57.472344 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:57.472374 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:57.472289 1195189 retry.go:31] will retry after 366.370663ms: waiting for machine to come up
	I0731 22:42:57.839750 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:57.840206 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:57.840239 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:57.840155 1195189 retry.go:31] will retry after 589.677038ms: waiting for machine to come up
	I0731 22:42:58.432138 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:58.432590 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:58.432631 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:58.432495 1195189 retry.go:31] will retry after 639.331637ms: waiting for machine to come up
	I0731 22:42:59.074637 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:59.075071 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:59.075098 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:59.075035 1195189 retry.go:31] will retry after 1.165105041s: waiting for machine to come up
	I0731 22:43:00.241778 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:00.242278 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:00.242314 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:00.242248 1195189 retry.go:31] will retry after 1.417874278s: waiting for machine to come up
	I0731 22:43:01.661880 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:01.662343 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:01.662376 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:01.662294 1195189 retry.go:31] will retry after 1.838176737s: waiting for machine to come up
	I0731 22:43:03.503498 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:03.504051 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:03.504072 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:03.504005 1195189 retry.go:31] will retry after 1.866715326s: waiting for machine to come up
	I0731 22:43:05.371904 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:05.372437 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:05.372465 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:05.372367 1195189 retry.go:31] will retry after 2.815377302s: waiting for machine to come up
	I0731 22:43:08.189148 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:08.189639 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:08.189664 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:08.189609 1195189 retry.go:31] will retry after 3.016103993s: waiting for machine to come up
	I0731 22:43:11.207889 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:11.208362 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:11.208388 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:11.208303 1195189 retry.go:31] will retry after 2.745386751s: waiting for machine to come up
	I0731 22:43:13.955701 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:13.956167 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:13.956194 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:13.956116 1195189 retry.go:31] will retry after 3.553091765s: waiting for machine to come up
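
The "will retry after ..." lines above form a jittered backoff loop: the driver keeps querying libvirt's DHCP leases for the new MAC address, starting around 300ms and backing off to a few seconds, until an IP appears. A stripped-down version of that pattern; lookupIP stands in for the driver's lease query and is not a real minikube function:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookupIP with growing, jittered delays, much like the
    // retry.go loop in the log. lookupIP is a placeholder for the kvm2
    // driver's DHCP-lease query by MAC address.
    func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookupIP(); ok {
                return ip, nil
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // add jitter
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for machine IP")
    }
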
	I0731 22:43:17.512455 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.512924 1194386 main.go:141] libmachine: (ha-150891-m03) Found IP for machine: 192.168.39.241
	I0731 22:43:17.512950 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has current primary IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.512958 1194386 main.go:141] libmachine: (ha-150891-m03) Reserving static IP address...
	I0731 22:43:17.513491 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find host DHCP lease matching {name: "ha-150891-m03", mac: "52:54:00:f8:ec:6d", ip: "192.168.39.241"} in network mk-ha-150891
	I0731 22:43:17.598408 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Getting to WaitForSSH function...
	I0731 22:43:17.598436 1194386 main.go:141] libmachine: (ha-150891-m03) Reserved static IP address: 192.168.39.241
	I0731 22:43:17.598449 1194386 main.go:141] libmachine: (ha-150891-m03) Waiting for SSH to be available...
	I0731 22:43:17.601142 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.601539 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:17.601572 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.601699 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Using SSH client type: external
	I0731 22:43:17.601725 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa (-rw-------)
	I0731 22:43:17.601757 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 22:43:17.601769 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | About to run SSH command:
	I0731 22:43:17.601784 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | exit 0
	I0731 22:43:17.724181 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | SSH cmd err, output: <nil>: 
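
For the first reachability probe the driver shells out to /usr/bin/ssh with the option list logged above (no known-hosts file, key-only auth) and runs `exit 0`; a zero exit status means sshd is answering. A compressed os/exec sketch with an abridged option list:

    package main

    import "os/exec"

    // sshExitZero reproduces the external-ssh probe from the log: connect
    // with the machine's key, run `exit 0`, and report whether it succeeded.
    func sshExitZero(ip, keyPath string) bool {
        cmd := exec.Command("/usr/bin/ssh",
            "-F", "/dev/null",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@"+ip,
            "exit 0")
        return cmd.Run() == nil
    }
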
	I0731 22:43:17.724471 1194386 main.go:141] libmachine: (ha-150891-m03) KVM machine creation complete!
	I0731 22:43:17.724848 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetConfigRaw
	I0731 22:43:17.725444 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:17.725691 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:17.725856 1194386 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 22:43:17.725871 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetState
	I0731 22:43:17.727131 1194386 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 22:43:17.727148 1194386 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 22:43:17.727154 1194386 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 22:43:17.727160 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:17.729961 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.730388 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:17.730415 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.730567 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:17.730782 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.731011 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.731179 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:17.731365 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:17.731622 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:17.731635 1194386 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 22:43:17.835468 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:43:17.835492 1194386 main.go:141] libmachine: Detecting the provisioner...
	I0731 22:43:17.835513 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:17.838605 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.839065 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:17.839092 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.839314 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:17.839552 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.839722 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.839912 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:17.840133 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:17.840305 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:17.840317 1194386 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 22:43:17.944814 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 22:43:17.944915 1194386 main.go:141] libmachine: found compatible host: buildroot
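Provisioner detection comes down to reading the ID field from the "cat /etc/os-release" output shown above. A small Go sketch of that parsing step; the function name is assumed, not minikube's:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectProvisioner extracts the ID field from /etc/os-release output,
    // which is enough to reach a "found compatible host: buildroot" decision.
    func detectProvisioner(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
        fmt.Println(detectProvisioner(sample)) // prints: buildroot
    }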
	I0731 22:43:17.944929 1194386 main.go:141] libmachine: Provisioning with buildroot...
	I0731 22:43:17.944943 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetMachineName
	I0731 22:43:17.945227 1194386 buildroot.go:166] provisioning hostname "ha-150891-m03"
	I0731 22:43:17.945244 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetMachineName
	I0731 22:43:17.945453 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:17.948348 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.948753 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:17.948787 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.948985 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:17.949167 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.949321 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.949437 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:17.949660 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:17.949878 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:17.949892 1194386 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-150891-m03 && echo "ha-150891-m03" | sudo tee /etc/hostname
	I0731 22:43:18.066362 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150891-m03
	
	I0731 22:43:18.066394 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.069257 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.069654 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.069688 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.069904 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:18.070126 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.070313 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.070438 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:18.070633 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:18.070846 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:18.070863 1194386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-150891-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-150891-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-150891-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:43:18.185515 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
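The shell fragment above is what keeps 127.0.1.1 pointing at the node's hostname. A minimal Go sketch that assembles the same fragment for an arbitrary hostname; the helper name is illustrative:

    package main

    import "fmt"

    // hostsUpdateCmd builds a shell snippet, like the one logged above, that
    // rewrites (or appends) the 127.0.1.1 entry in /etc/hosts for hostname.
    func hostsUpdateCmd(hostname string) string {
        return fmt.Sprintf(`
    if ! grep -xq '.*\s%s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
      else
        echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
      fi
    fi`, hostname, hostname, hostname)
    }

    func main() {
        fmt.Println(hostsUpdateCmd("ha-150891-m03"))
    }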
	I0731 22:43:18.185558 1194386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 22:43:18.185582 1194386 buildroot.go:174] setting up certificates
	I0731 22:43:18.185602 1194386 provision.go:84] configureAuth start
	I0731 22:43:18.185620 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetMachineName
	I0731 22:43:18.185957 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:43:18.188745 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.189101 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.189126 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.189318 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.191804 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.192159 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.192188 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.192322 1194386 provision.go:143] copyHostCerts
	I0731 22:43:18.192359 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:43:18.192402 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 22:43:18.192413 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:43:18.192479 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 22:43:18.192559 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:43:18.192583 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 22:43:18.192590 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:43:18.192615 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 22:43:18.192661 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:43:18.192679 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 22:43:18.192683 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:43:18.192708 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
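copyHostCerts follows a remove-then-copy pattern: any stale destination file is deleted and a fresh copy is written. A self-contained Go sketch of that idempotent refresh; the helper name and paths are illustrative:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // refreshCopy mirrors the "found ..., removing ..." pattern above: remove
    // an existing destination, then write a fresh copy of src in its place.
    // It returns the number of bytes copied.
    func refreshCopy(src, dst string) (int, error) {
        data, err := os.ReadFile(src)
        if err != nil {
            return 0, err
        }
        if _, err := os.Stat(dst); err == nil {
            if err := os.Remove(dst); err != nil { // stale copy found, remove it
                return 0, err
            }
        }
        if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
            return 0, err
        }
        return len(data), os.WriteFile(dst, data, 0o600)
    }

    func main() {
        n, err := refreshCopy("certs/ca.pem", "out/ca.pem") // paths are illustrative
        fmt.Println(n, err)
    }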
	I0731 22:43:18.192755 1194386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.ha-150891-m03 san=[127.0.0.1 192.168.39.241 ha-150891-m03 localhost minikube]
	I0731 22:43:18.331536 1194386 provision.go:177] copyRemoteCerts
	I0731 22:43:18.331616 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:43:18.331654 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.334828 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.335247 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.335281 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.335494 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:18.335721 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.335916 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:18.336144 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:43:18.418445 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 22:43:18.418536 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 22:43:18.442720 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 22:43:18.442802 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 22:43:18.467289 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 22:43:18.467385 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 22:43:18.494185 1194386 provision.go:87] duration metric: took 308.563329ms to configureAuth
	I0731 22:43:18.494219 1194386 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:43:18.494487 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:43:18.494604 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.497605 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.497948 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.497970 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.498219 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:18.498435 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.498614 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.498736 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:18.498905 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:18.499094 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:18.499114 1194386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 22:43:18.762164 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 22:43:18.762196 1194386 main.go:141] libmachine: Checking connection to Docker...
	I0731 22:43:18.762204 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetURL
	I0731 22:43:18.763559 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Using libvirt version 6000000
	I0731 22:43:18.765738 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.766055 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.766090 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.766227 1194386 main.go:141] libmachine: Docker is up and running!
	I0731 22:43:18.766244 1194386 main.go:141] libmachine: Reticulating splines...
	I0731 22:43:18.766251 1194386 client.go:171] duration metric: took 24.058995248s to LocalClient.Create
	I0731 22:43:18.766272 1194386 start.go:167] duration metric: took 24.059065044s to libmachine.API.Create "ha-150891"
	I0731 22:43:18.766282 1194386 start.go:293] postStartSetup for "ha-150891-m03" (driver="kvm2")
	I0731 22:43:18.766294 1194386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:43:18.766312 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:18.766578 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:43:18.766602 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.768838 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.769209 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.769235 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.769376 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:18.769567 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.769722 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:18.769868 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:43:18.850737 1194386 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:43:18.855138 1194386 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:43:18.855176 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 22:43:18.855259 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 22:43:18.855362 1194386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 22:43:18.855375 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /etc/ssl/certs/11794002.pem
	I0731 22:43:18.855486 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:43:18.865095 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:43:18.890674 1194386 start.go:296] duration metric: took 124.375062ms for postStartSetup
	I0731 22:43:18.890749 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetConfigRaw
	I0731 22:43:18.891459 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:43:18.894646 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.895057 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.895090 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.895394 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:43:18.895629 1194386 start.go:128] duration metric: took 24.207820708s to createHost
	I0731 22:43:18.895656 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.898870 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.899257 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.899290 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.899499 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:18.899794 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.899971 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.900148 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:18.900324 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:18.900533 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:18.900544 1194386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 22:43:19.008729 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465798.983381385
	
	I0731 22:43:19.008761 1194386 fix.go:216] guest clock: 1722465798.983381385
	I0731 22:43:19.008772 1194386 fix.go:229] Guest: 2024-07-31 22:43:18.983381385 +0000 UTC Remote: 2024-07-31 22:43:18.895642 +0000 UTC m=+158.430426748 (delta=87.739385ms)
	I0731 22:43:19.008796 1194386 fix.go:200] guest clock delta is within tolerance: 87.739385ms
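The clock check compares the guest's "date +%s.%N" reading against the host-side timestamp and accepts the node when the skew is small. Reproducing the arithmetic from the two timestamps logged above; the 2s tolerance used here is an assumption, not taken from the log:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports the absolute skew between the guest clock and the
    // host's view of the remote time, and whether it is within the allowed max.
    func withinTolerance(guest, remote time.Time, max time.Duration) (time.Duration, bool) {
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= max
    }

    func main() {
        guest := time.Unix(1722465798, 983381385).UTC()                          // 22:43:18.983381385
        remote := time.Date(2024, time.July, 31, 22, 43, 18, 895642000, time.UTC) // 22:43:18.895642
        delta, ok := withinTolerance(guest, remote, 2*time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=87.739385ms, matching the log
    }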
	I0731 22:43:19.008811 1194386 start.go:83] releasing machines lock for "ha-150891-m03", held for 24.321114914s
	I0731 22:43:19.008834 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:19.009144 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:43:19.011897 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.012288 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:19.012319 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.014695 1194386 out.go:177] * Found network options:
	I0731 22:43:19.016080 1194386 out.go:177]   - NO_PROXY=192.168.39.105,192.168.39.224
	W0731 22:43:19.017320 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:43:19.017344 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:43:19.017364 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:19.018037 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:19.018268 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:19.018380 1194386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 22:43:19.018425 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	W0731 22:43:19.018460 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:43:19.018498 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:43:19.018571 1194386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 22:43:19.018595 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:19.021532 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.021728 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.022029 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:19.022061 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.022222 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:19.022243 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.022290 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:19.022408 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:19.022507 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:19.022611 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:19.022661 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:19.022761 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:19.022837 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:43:19.022862 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:43:19.252053 1194386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 22:43:19.258228 1194386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:43:19.258318 1194386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:43:19.274777 1194386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 22:43:19.274807 1194386 start.go:495] detecting cgroup driver to use...
	I0731 22:43:19.274879 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:43:19.291751 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:43:19.307383 1194386 docker.go:217] disabling cri-docker service (if available) ...
	I0731 22:43:19.307457 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 22:43:19.322719 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 22:43:19.337567 1194386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 22:43:19.457968 1194386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 22:43:19.629095 1194386 docker.go:233] disabling docker service ...
	I0731 22:43:19.629167 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 22:43:19.647627 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 22:43:19.660580 1194386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 22:43:19.779952 1194386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 22:43:19.892979 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 22:43:19.908391 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:43:19.926742 1194386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 22:43:19.926806 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:19.938918 1194386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 22:43:19.938989 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:19.950401 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:19.962124 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:19.972986 1194386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:43:19.984219 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:19.995444 1194386 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:20.014921 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:20.026727 1194386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:43:20.037116 1194386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 22:43:20.037185 1194386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 22:43:20.050003 1194386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:43:20.060866 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:43:20.175020 1194386 ssh_runner.go:195] Run: sudo systemctl restart crio
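The CRI-O drop-in edits above are plain sed rewrites of /etc/crio/crio.conf.d/02-crio.conf followed by a restart. A Go sketch that builds the same pause-image and cgroup-manager commands; the helper name is assumed:

    package main

    import "fmt"

    // crioConfCmds returns shell commands, like those logged above, that point
    // CRI-O at the desired pause image and cgroup manager via its drop-in file.
    func crioConfCmds(pauseImage, cgroupManager string) []string {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        return []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
            `sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
            `sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
        }
    }

    func main() {
        for _, c := range crioConfCmds("registry.k8s.io/pause:3.9", "cgroupfs") {
            fmt.Println(c)
        }
    }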
	I0731 22:43:20.309613 1194386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 22:43:20.309718 1194386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 22:43:20.314499 1194386 start.go:563] Will wait 60s for crictl version
	I0731 22:43:20.314571 1194386 ssh_runner.go:195] Run: which crictl
	I0731 22:43:20.319563 1194386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:43:20.361170 1194386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 22:43:20.361273 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:43:20.391549 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:43:20.422842 1194386 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 22:43:20.424428 1194386 out.go:177]   - env NO_PROXY=192.168.39.105
	I0731 22:43:20.426139 1194386 out.go:177]   - env NO_PROXY=192.168.39.105,192.168.39.224
	I0731 22:43:20.427240 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:43:20.430108 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:20.430537 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:20.430561 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:20.430835 1194386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 22:43:20.435079 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:43:20.447675 1194386 mustload.go:65] Loading cluster: ha-150891
	I0731 22:43:20.447955 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:43:20.448323 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:43:20.448374 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:43:20.464739 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
	I0731 22:43:20.465283 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:43:20.465862 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:43:20.465890 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:43:20.466208 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:43:20.466502 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:43:20.468414 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:43:20.468753 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:43:20.468799 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:43:20.485333 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0731 22:43:20.485778 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:43:20.486311 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:43:20.486338 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:43:20.486680 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:43:20.486882 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:43:20.487060 1194386 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891 for IP: 192.168.39.241
	I0731 22:43:20.487070 1194386 certs.go:194] generating shared ca certs ...
	I0731 22:43:20.487086 1194386 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:43:20.487226 1194386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 22:43:20.487292 1194386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 22:43:20.487306 1194386 certs.go:256] generating profile certs ...
	I0731 22:43:20.487389 1194386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key
	I0731 22:43:20.487425 1194386 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.0f836ffe
	I0731 22:43:20.487451 1194386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.0f836ffe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.105 192.168.39.224 192.168.39.241 192.168.39.254]
	I0731 22:43:20.555181 1194386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.0f836ffe ...
	I0731 22:43:20.555219 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.0f836ffe: {Name:mkc8b2401f2f9f966b15bd390172fe6b11839037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:43:20.555423 1194386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.0f836ffe ...
	I0731 22:43:20.555442 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.0f836ffe: {Name:mk1efed90e04277ecee2ba1c415a4310493e916e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:43:20.555545 1194386 certs.go:381] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.0f836ffe -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt
	I0731 22:43:20.555702 1194386 certs.go:385] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.0f836ffe -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key
	I0731 22:43:20.555866 1194386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key
	I0731 22:43:20.555885 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:43:20.555905 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:43:20.555929 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:43:20.555950 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:43:20.555968 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:43:20.555987 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:43:20.556004 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:43:20.556022 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:43:20.556109 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 22:43:20.556162 1194386 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 22:43:20.556176 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 22:43:20.556211 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 22:43:20.556244 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 22:43:20.556278 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 22:43:20.556331 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:43:20.556376 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /usr/share/ca-certificates/11794002.pem
	I0731 22:43:20.556397 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:43:20.556415 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem -> /usr/share/ca-certificates/1179400.pem
	I0731 22:43:20.556460 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:43:20.559798 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:43:20.560204 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:43:20.560220 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:43:20.560434 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:43:20.560647 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:43:20.560822 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:43:20.560929 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:43:20.636520 1194386 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0731 22:43:20.642300 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 22:43:20.653070 1194386 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0731 22:43:20.657085 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 22:43:20.668817 1194386 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 22:43:20.673108 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 22:43:20.683662 1194386 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0731 22:43:20.687852 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0731 22:43:20.699764 1194386 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0731 22:43:20.704447 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 22:43:20.716290 1194386 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0731 22:43:20.720294 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 22:43:20.731101 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:43:20.755522 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:43:20.781270 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:43:20.805155 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 22:43:20.829180 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 22:43:20.852764 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 22:43:20.877560 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:43:20.902249 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:43:20.926291 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 22:43:20.949801 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:43:20.974289 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 22:43:21.000494 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 22:43:21.018051 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 22:43:21.034844 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 22:43:21.052916 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0731 22:43:21.071129 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 22:43:21.091925 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 22:43:21.108701 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 22:43:21.126834 1194386 ssh_runner.go:195] Run: openssl version
	I0731 22:43:21.132603 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 22:43:21.144575 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 22:43:21.149631 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 22:43:21.149706 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 22:43:21.155740 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 22:43:21.167550 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:43:21.178551 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:43:21.183482 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:43:21.183582 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:43:21.189616 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:43:21.200674 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 22:43:21.212546 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 22:43:21.217364 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 22:43:21.217442 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 22:43:21.223342 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
	I0731 22:43:21.234958 1194386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:43:21.239398 1194386 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:43:21.239485 1194386 kubeadm.go:934] updating node {m03 192.168.39.241 8443 v1.30.3 crio true true} ...
	I0731 22:43:21.239601 1194386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-150891-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
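The unit file above overrides ExecStart so the joining node's kubelet picks up its hostname override and node IP. A tiny Go sketch assembling that flag line; the function name and layout are illustrative, not minikube's template:

    package main

    import (
        "fmt"
        "strings"
    )

    // kubeletFlags assembles the ExecStart line shown in the generated unit
    // above for a joining control-plane node.
    func kubeletFlags(version, hostname, nodeIP string) string {
        flags := []string{
            "--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
            "--config=/var/lib/kubelet/config.yaml",
            "--hostname-override=" + hostname,
            "--kubeconfig=/etc/kubernetes/kubelet.conf",
            "--node-ip=" + nodeIP,
        }
        return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", version, strings.Join(flags, " "))
    }

    func main() {
        fmt.Println(kubeletFlags("v1.30.3", "ha-150891-m03", "192.168.39.241"))
    }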
	I0731 22:43:21.239638 1194386 kube-vip.go:115] generating kube-vip config ...
	I0731 22:43:21.239703 1194386 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:43:21.255986 1194386 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:43:21.256074 1194386 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0731 22:43:21.256168 1194386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:43:21.267553 1194386 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 22:43:21.267610 1194386 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 22:43:21.277968 1194386 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 22:43:21.278038 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:43:21.277973 1194386 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 22:43:21.277973 1194386 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 22:43:21.278126 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:43:21.278129 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:43:21.278224 1194386 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:43:21.278225 1194386 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:43:21.292542 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:43:21.292652 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 22:43:21.292670 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 22:43:21.292693 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 22:43:21.292694 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 22:43:21.292664 1194386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:43:21.308756 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 22:43:21.308794 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
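The lines above show the stat-then-transfer pattern for the v1.30.3 binaries: each existence check exits with status 1 because the files are missing, so kubeadm, kubectl and kubelet are copied from the local cache into /var/lib/minikube/binaries/v1.30.3. A hedged local-filesystem sketch of the same idea (illustrative paths; not the ssh_runner/scp implementation used here):

```go
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

// ensureBinary copies src into destDir unless a file with the same name is
// already there, mirroring the stat-then-copy pattern in the log above.
func ensureBinary(src, destDir string) error {
	dst := filepath.Join(destDir, filepath.Base(src))
	if _, err := os.Stat(dst); err == nil {
		return nil // already transferred
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	cache := os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.30.3") // illustrative cache path
	destDir := "/var/lib/minikube/binaries/v1.30.3"
	if err := os.MkdirAll(destDir, 0o755); err != nil {
		log.Fatal(err)
	}
	for _, name := range []string{"kubeadm", "kubectl", "kubelet"} {
		if err := ensureBinary(filepath.Join(cache, name), destDir); err != nil {
			log.Fatalf("%s: %v", name, err)
		}
	}
}
```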
	I0731 22:43:22.264318 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 22:43:22.274578 1194386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 22:43:22.291970 1194386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:43:22.309449 1194386 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 22:43:22.327474 1194386 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:43:22.331734 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
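The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: it filters out any existing line for that host and appends a fresh 192.168.39.254 entry. The same logic expressed in Go, as a sketch (a hypothetical helper, not part of minikube):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the one-liner above: drop any existing line that
// ends in "<tab>host", then append the desired "ip<tab>host" mapping.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale mapping, replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```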
	I0731 22:43:22.345065 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:43:22.492236 1194386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:43:22.510006 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:43:22.510433 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:43:22.510488 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:43:22.527382 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
	I0731 22:43:22.527849 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:43:22.528391 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:43:22.528420 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:43:22.528828 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:43:22.529059 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:43:22.529249 1194386 start.go:317] joinCluster: &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cluster
Name:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:43:22.529422 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 22:43:22.529444 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:43:22.532291 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:43:22.532844 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:43:22.532872 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:43:22.533030 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:43:22.533238 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:43:22.533430 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:43:22.533609 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:43:22.695856 1194386 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:43:22.695917 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yn92gg.uccsz8l2wa3z9w2v --discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-150891-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443"
	I0731 22:43:45.488902 1194386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yn92gg.uccsz8l2wa3z9w2v --discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-150891-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443": (22.792955755s)
	I0731 22:43:45.488953 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 22:43:45.954644 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-150891-m03 minikube.k8s.io/updated_at=2024_07_31T22_43_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-150891 minikube.k8s.io/primary=false
	I0731 22:43:46.072646 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-150891-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 22:43:46.176653 1194386 start.go:319] duration metric: took 23.647404089s to joinCluster
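The join itself (22:43:22 through 22:43:46) is two shell steps: mint a join command on the existing control plane with `kubeadm token create --print-join-command --ttl=0`, then run the printed `kubeadm join` on the new node with the extra control-plane flags shown above, before labeling and un-tainting it. A sketch of those two steps driven locally from Go (minikube actually runs them over SSH; the flag values are copied from the log):

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.30.3/kubeadm"

	// Step 1 (on an existing control-plane node): mint a join command.
	out, err := exec.Command(kubeadm, "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatal(err)
	}
	fields := strings.Fields(strings.TrimSpace(string(out)))
	if len(fields) < 2 || fields[0] != "kubeadm" {
		log.Fatalf("unexpected join command: %q", string(out))
	}

	// Step 2 (on the joining node): append the control-plane flags seen in the
	// log and execute the join.
	args := append(fields[1:],
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=ha-150891-m03",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.241",
		"--apiserver-bind-port=8443",
	)
	join := exec.Command(kubeadm, args...)
	join.Stdout, join.Stderr = os.Stdout, os.Stderr
	if err := join.Run(); err != nil {
		log.Fatal(err)
	}
}
```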
	I0731 22:43:46.176776 1194386 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:43:46.177133 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:43:46.178084 1194386 out.go:177] * Verifying Kubernetes components...
	I0731 22:43:46.179301 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:43:46.386899 1194386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:43:46.414585 1194386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:43:46.414819 1194386 kapi.go:59] client config for ha-150891: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d035c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 22:43:46.414897 1194386 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.105:8443
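The rest.Config dump above is the client built from the Jenkins kubeconfig, with the stale kube-vip host https://192.168.39.254:8443 overridden by one node's direct endpoint. A minimal client-go equivalent (paths and addresses taken from the log; a sketch, not minikube's kapi helper):

```go
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/19312-1172186/kubeconfig"
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	// Point at one control-plane node directly instead of the kube-vip VIP,
	// mirroring the "Overriding stale ClientConfig host" line above.
	cfg.Host = "https://192.168.39.105:8443"

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = clientset // used the same way as in the node/pod polling sketches further below
}
```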
	I0731 22:43:46.415131 1194386 node_ready.go:35] waiting up to 6m0s for node "ha-150891-m03" to be "Ready" ...
	I0731 22:43:46.415223 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:46.415231 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:46.415238 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:46.415242 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:46.418595 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:46.915567 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:46.915592 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:46.915601 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:46.915606 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:46.919505 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:47.416036 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:47.416060 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:47.416068 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:47.416081 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:47.419801 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:47.916073 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:47.916120 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:47.916133 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:47.916140 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:47.920309 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:43:48.416297 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:48.416320 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:48.416329 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:48.416333 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:48.420161 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:48.420770 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:48.915740 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:48.915772 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:48.915785 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:48.915793 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:48.919878 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:43:49.416195 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:49.416239 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:49.416249 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:49.416252 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:49.420198 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:49.915741 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:49.915775 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:49.915786 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:49.915794 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:49.919488 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:50.415458 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:50.415486 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:50.415494 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:50.415499 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:50.419037 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:50.915410 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:50.915438 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:50.915446 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:50.915451 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:50.919270 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:50.919705 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:51.416222 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:51.416251 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:51.416263 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:51.416268 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:51.420074 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:51.915844 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:51.915876 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:51.915888 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:51.915893 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:51.919367 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:52.415794 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:52.415879 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:52.415906 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:52.415914 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:52.423284 1194386 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:43:52.916224 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:52.916248 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:52.916258 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:52.916262 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:52.919768 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:52.920304 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:53.415521 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:53.415547 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:53.415556 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:53.415559 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:53.418678 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:53.915435 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:53.915465 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:53.915473 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:53.915478 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:53.918908 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:54.415998 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:54.416024 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:54.416033 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:54.416037 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:54.419295 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:54.915916 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:54.915940 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:54.915949 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:54.915953 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:54.919873 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:54.920481 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:55.415757 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:55.415791 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:55.415801 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:55.415806 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:55.419361 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:55.915668 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:55.915694 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:55.915702 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:55.915706 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:55.919284 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:56.415352 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:56.415381 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:56.415391 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:56.415396 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:56.418994 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:56.915814 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:56.915853 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:56.915865 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:56.915872 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:56.920083 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:43:56.921114 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:57.416047 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:57.416079 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:57.416111 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:57.416117 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:57.419701 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:57.916292 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:57.916317 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:57.916326 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:57.916330 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:57.919935 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:58.415824 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:58.415852 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:58.415862 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:58.415867 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:58.419822 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:58.916033 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:58.916059 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:58.916067 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:58.916071 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:58.919588 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:59.415798 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:59.415830 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:59.415842 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:59.415848 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:59.420196 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:43:59.420792 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:59.916347 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:59.916372 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:59.916381 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:59.916384 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:59.919682 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:00.415444 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:00.415471 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:00.415480 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:00.415483 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:00.418943 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:00.916163 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:00.916190 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:00.916198 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:00.916202 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:00.919264 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:01.416255 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:01.416279 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:01.416288 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:01.416293 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:01.419698 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:01.915625 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:01.915665 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:01.915678 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:01.915685 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:01.919543 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:01.920013 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:44:02.415872 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:02.415899 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:02.415910 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:02.415915 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:02.419572 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:02.915938 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:02.915962 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:02.915970 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:02.915974 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:02.919715 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.415631 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:03.415659 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.415668 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.415675 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.419144 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.419712 1194386 node_ready.go:49] node "ha-150891-m03" has status "Ready":"True"
	I0731 22:44:03.419733 1194386 node_ready.go:38] duration metric: took 17.004587794s for node "ha-150891-m03" to be "Ready" ...
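The repeated GET /api/v1/nodes/ha-150891-m03 calls above are a roughly 500ms poll on the node's Ready condition, which took about 17s to turn True. Approximately the same wait expressed directly with client-go (a sketch, assuming the default kubeconfig points at this cluster):

```go
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node every 500ms until its Ready condition is True
// or the timeout expires, mirroring node_ready.go's 6m wait in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitNodeReady(context.Background(), cs, "ha-150891-m03", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```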
	I0731 22:44:03.419743 1194386 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:44:03.419830 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:44:03.419840 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.419847 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.419852 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.426803 1194386 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:44:03.434683 1194386 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.434816 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4928n
	I0731 22:44:03.434827 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.434839 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.434849 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.438280 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.439024 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:03.439044 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.439057 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.439064 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.443273 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:44:03.443765 1194386 pod_ready.go:92] pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:03.443784 1194386 pod_ready.go:81] duration metric: took 9.066139ms for pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace to be "Ready" ...
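Each per-pod wait that follows does the same thing for the system-critical pods in kube-system: fetch the pod, check its Ready condition, re-poll until it is True. A hedged client-go sketch of one such wait (the pod name is just the first example from the log; kubeconfig discovery is assumed):

```go
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady re-polls a kube-system pod until its Ready condition is True,
// the per-pod check pod_ready.go performs for each entry listed above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Example pod name taken from the log above.
	if err := waitPodReady(context.Background(), cs, "coredns-7db6d8ff4d-4928n", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```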
	I0731 22:44:03.443795 1194386 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.443877 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rkd4j
	I0731 22:44:03.443887 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.443895 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.443899 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.446490 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:44:03.447585 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:03.447626 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.447638 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.447644 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.450878 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.451331 1194386 pod_ready.go:92] pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:03.451351 1194386 pod_ready.go:81] duration metric: took 7.548977ms for pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.451361 1194386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.451415 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891
	I0731 22:44:03.451423 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.451430 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.451433 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.454342 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:44:03.454921 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:03.454939 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.454947 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.454952 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.457911 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:44:03.458350 1194386 pod_ready.go:92] pod "etcd-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:03.458374 1194386 pod_ready.go:81] duration metric: took 7.005484ms for pod "etcd-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.458388 1194386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.458462 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891-m02
	I0731 22:44:03.458472 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.458485 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.458504 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.461397 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:44:03.461927 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:03.461943 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.461952 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.461958 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.464805 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:44:03.465353 1194386 pod_ready.go:92] pod "etcd-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:03.465379 1194386 pod_ready.go:81] duration metric: took 6.978907ms for pod "etcd-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.465392 1194386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.616682 1194386 request.go:629] Waited for 151.195905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891-m03
	I0731 22:44:03.616746 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891-m03
	I0731 22:44:03.616753 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.616763 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.616769 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.620625 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.816614 1194386 request.go:629] Waited for 195.444036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:03.816704 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:03.816711 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.816721 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.816731 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.820355 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.821203 1194386 pod_ready.go:92] pod "etcd-ha-150891-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:03.821227 1194386 pod_ready.go:81] duration metric: took 355.826856ms for pod "etcd-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
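The "Waited for ... due to client-side throttling" messages above come from client-go's default client-side rate limiter: with QPS:0 and Burst:0 in the earlier rest.Config dump, the defaults of 5 QPS / 10 burst apply, so the paired pod+node GETs get spaced out by roughly 150-200ms. If that throttling were a problem, it can be relaxed on the config (a sketch; the values chosen are arbitrary):

```go
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	// QPS:0 / Burst:0 (as in the dump above) means client-go's defaults of
	// 5 QPS and 10 burst; raising them shortens the client-side waits.
	cfg.QPS = 50
	cfg.Burst = 100
	// cfg.RateLimiter can also be set explicitly (see the
	// flowcontrol.RateLimiter field in the dump) to replace the default token bucket.

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
}
```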
	I0731 22:44:03.821251 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:04.016613 1194386 request.go:629] Waited for 195.26955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891
	I0731 22:44:04.016711 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891
	I0731 22:44:04.016718 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:04.016729 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:04.016738 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:04.020320 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:04.216469 1194386 request.go:629] Waited for 195.383981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:04.216577 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:04.216588 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:04.216602 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:04.216611 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:04.219872 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:04.220488 1194386 pod_ready.go:92] pod "kube-apiserver-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:04.220511 1194386 pod_ready.go:81] duration metric: took 399.24917ms for pod "kube-apiserver-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:04.220522 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:04.416625 1194386 request.go:629] Waited for 196.005775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m02
	I0731 22:44:04.416691 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m02
	I0731 22:44:04.416697 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:04.416705 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:04.416712 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:04.419947 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:04.616576 1194386 request.go:629] Waited for 195.788726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:04.616662 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:04.616668 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:04.616676 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:04.616684 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:04.619902 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:04.620491 1194386 pod_ready.go:92] pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:04.620519 1194386 pod_ready.go:81] duration metric: took 399.987689ms for pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:04.620534 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:04.815902 1194386 request.go:629] Waited for 195.285802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m03
	I0731 22:44:04.816002 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m03
	I0731 22:44:04.816012 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:04.816020 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:04.816026 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:04.819509 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.016619 1194386 request.go:629] Waited for 196.368245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:05.016702 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:05.016714 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:05.016726 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:05.016738 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:05.020239 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.020664 1194386 pod_ready.go:92] pod "kube-apiserver-ha-150891-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:05.020686 1194386 pod_ready.go:81] duration metric: took 400.145516ms for pod "kube-apiserver-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:05.020696 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:05.215840 1194386 request.go:629] Waited for 195.070368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891
	I0731 22:44:05.215907 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891
	I0731 22:44:05.215913 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:05.215921 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:05.215925 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:05.219501 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.415990 1194386 request.go:629] Waited for 195.397538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:05.416076 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:05.416083 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:05.416115 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:05.416121 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:05.419718 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.420477 1194386 pod_ready.go:92] pod "kube-controller-manager-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:05.420502 1194386 pod_ready.go:81] duration metric: took 399.798279ms for pod "kube-controller-manager-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:05.420514 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:05.616245 1194386 request.go:629] Waited for 195.615583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m02
	I0731 22:44:05.616335 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m02
	I0731 22:44:05.616346 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:05.616359 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:05.616366 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:05.620138 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.816454 1194386 request.go:629] Waited for 195.4864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:05.816551 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:05.816559 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:05.816570 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:05.816581 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:05.819761 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.820249 1194386 pod_ready.go:92] pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:05.820268 1194386 pod_ready.go:81] duration metric: took 399.747549ms for pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:05.820280 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:06.016444 1194386 request.go:629] Waited for 196.063578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m03
	I0731 22:44:06.016523 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m03
	I0731 22:44:06.016529 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:06.016536 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:06.016540 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:06.019960 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:06.216177 1194386 request.go:629] Waited for 195.238135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:06.216267 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:06.216274 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:06.216284 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:06.216292 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:06.219535 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:06.220036 1194386 pod_ready.go:92] pod "kube-controller-manager-ha-150891-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:06.220058 1194386 pod_ready.go:81] duration metric: took 399.769239ms for pod "kube-controller-manager-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:06.220068 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9xcss" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:06.416428 1194386 request.go:629] Waited for 196.255398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xcss
	I0731 22:44:06.416515 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xcss
	I0731 22:44:06.416523 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:06.416538 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:06.416546 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:06.419896 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:06.615996 1194386 request.go:629] Waited for 195.374732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:06.616082 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:06.616104 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:06.616116 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:06.616123 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:06.619394 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:06.619930 1194386 pod_ready.go:92] pod "kube-proxy-9xcss" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:06.619953 1194386 pod_ready.go:81] duration metric: took 399.876714ms for pod "kube-proxy-9xcss" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:06.619963 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-df4cg" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:06.815906 1194386 request.go:629] Waited for 195.838575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-df4cg
	I0731 22:44:06.815984 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-df4cg
	I0731 22:44:06.815991 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:06.816000 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:06.816005 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:06.819817 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.015762 1194386 request.go:629] Waited for 195.267194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:07.015872 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:07.015880 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:07.015892 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:07.015900 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:07.019123 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.019705 1194386 pod_ready.go:92] pod "kube-proxy-df4cg" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:07.019727 1194386 pod_ready.go:81] duration metric: took 399.756233ms for pod "kube-proxy-df4cg" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:07.019740 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmkp9" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:07.215766 1194386 request.go:629] Waited for 195.926306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nmkp9
	I0731 22:44:07.215861 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nmkp9
	I0731 22:44:07.215868 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:07.215876 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:07.215883 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:07.219202 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.416248 1194386 request.go:629] Waited for 196.380568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:07.416317 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:07.416325 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:07.416335 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:07.416341 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:07.419642 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.420146 1194386 pod_ready.go:92] pod "kube-proxy-nmkp9" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:07.420167 1194386 pod_ready.go:81] duration metric: took 400.416252ms for pod "kube-proxy-nmkp9" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:07.420177 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:07.615750 1194386 request.go:629] Waited for 195.478503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891
	I0731 22:44:07.615834 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891
	I0731 22:44:07.615841 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:07.615849 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:07.615854 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:07.619533 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.815694 1194386 request.go:629] Waited for 195.291759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:07.815762 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:07.815767 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:07.815775 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:07.815779 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:07.819412 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.820007 1194386 pod_ready.go:92] pod "kube-scheduler-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:07.820027 1194386 pod_ready.go:81] duration metric: took 399.844665ms for pod "kube-scheduler-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:07.820037 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:08.016209 1194386 request.go:629] Waited for 196.070733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m02
	I0731 22:44:08.016289 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m02
	I0731 22:44:08.016294 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.016304 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.016312 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.019423 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:08.216329 1194386 request.go:629] Waited for 196.370784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:08.216394 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:08.216400 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.216409 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.216414 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.219840 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:08.220324 1194386 pod_ready.go:92] pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:08.220347 1194386 pod_ready.go:81] duration metric: took 400.303486ms for pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:08.220356 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:08.416442 1194386 request.go:629] Waited for 195.994731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m03
	I0731 22:44:08.416537 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m03
	I0731 22:44:08.416543 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.416552 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.416556 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.419743 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:08.616709 1194386 request.go:629] Waited for 196.377943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:08.616809 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:08.616819 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.616829 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.616836 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.620591 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:08.621093 1194386 pod_ready.go:92] pod "kube-scheduler-ha-150891-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:08.621115 1194386 pod_ready.go:81] duration metric: took 400.752015ms for pod "kube-scheduler-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:08.621126 1194386 pod_ready.go:38] duration metric: took 5.201372685s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:44:08.621142 1194386 api_server.go:52] waiting for apiserver process to appear ...
	I0731 22:44:08.621199 1194386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:44:08.635926 1194386 api_server.go:72] duration metric: took 22.459091752s to wait for apiserver process to appear ...
	I0731 22:44:08.635955 1194386 api_server.go:88] waiting for apiserver healthz status ...
	I0731 22:44:08.635990 1194386 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0731 22:44:08.642616 1194386 api_server.go:279] https://192.168.39.105:8443/healthz returned 200:
	ok
	I0731 22:44:08.642793 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/version
	I0731 22:44:08.642809 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.642821 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.642832 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.643767 1194386 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 22:44:08.643854 1194386 api_server.go:141] control plane version: v1.30.3
	I0731 22:44:08.643874 1194386 api_server.go:131] duration metric: took 7.911396ms to wait for apiserver health ...
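(Editor's note on the healthz step above: the probe is just an authenticated GET against https://192.168.39.105:8443/healthz that is expected to return the literal body "ok", followed by a GET /version to read the control-plane version. As an illustrative aside, and not minikube's own code, the following minimal client-go sketch issues the same raw request; it assumes the kubeconfig minikube wrote to the default location.)

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig minikube configured (default ~/.kube/config path; an assumption here).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Issue the same authenticated GET /healthz seen in the log; the body is "ok" when the apiserver is healthy.
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}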
	I0731 22:44:08.643888 1194386 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 22:44:08.816342 1194386 request.go:629] Waited for 172.346114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:44:08.816418 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:44:08.816430 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.816441 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.816450 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.822997 1194386 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:44:08.830767 1194386 system_pods.go:59] 24 kube-system pods found
	I0731 22:44:08.830803 1194386 system_pods.go:61] "coredns-7db6d8ff4d-4928n" [258080d9-48d4-4214-a8c2-ccdd229a3a4f] Running
	I0731 22:44:08.830808 1194386 system_pods.go:61] "coredns-7db6d8ff4d-rkd4j" [b40942b0-bff9-4a49-88a3-d188d5b7dcbe] Running
	I0731 22:44:08.830812 1194386 system_pods.go:61] "etcd-ha-150891" [3f5f2e82-256b-406e-b58b-51255d338219] Running
	I0731 22:44:08.830816 1194386 system_pods.go:61] "etcd-ha-150891-m02" [d20ff7ae-a18e-476a-9f38-bf9d2eea9e32] Running
	I0731 22:44:08.830819 1194386 system_pods.go:61] "etcd-ha-150891-m03" [d320cf0e-70df-42ce-8a71-b103ab62c498] Running
	I0731 22:44:08.830822 1194386 system_pods.go:61] "kindnet-4qn8c" [4143fb96-5f2a-4107-807d-29ffbf5a95b8] Running
	I0731 22:44:08.830825 1194386 system_pods.go:61] "kindnet-8bkwq" [9d1ea907-d2a6-44ae-8a18-86686b21c2e6] Running
	I0731 22:44:08.830827 1194386 system_pods.go:61] "kindnet-bz2j7" [160def8b-f6ae-4664-8489-422121dd5a94] Running
	I0731 22:44:08.830830 1194386 system_pods.go:61] "kube-apiserver-ha-150891" [4b8aded2-d6a3-4493-ae6e-a345a4c1c872] Running
	I0731 22:44:08.830833 1194386 system_pods.go:61] "kube-apiserver-ha-150891-m02" [667b2e17-ae07-44a9-91ba-486fbacc93ae] Running
	I0731 22:44:08.830836 1194386 system_pods.go:61] "kube-apiserver-ha-150891-m03" [4dc100af-e2cd-4af9-a377-8486ba372988] Running
	I0731 22:44:08.830840 1194386 system_pods.go:61] "kube-controller-manager-ha-150891" [d3e86e76-fbc2-4732-acfc-8462570c27e4] Running
	I0731 22:44:08.830843 1194386 system_pods.go:61] "kube-controller-manager-ha-150891-m02" [952d0923-4ad6-4411-ae52-5bdfc69af65c] Running
	I0731 22:44:08.830846 1194386 system_pods.go:61] "kube-controller-manager-ha-150891-m03" [f38150d3-c750-45fa-ba87-cd66a1d1bf4d] Running
	I0731 22:44:08.830849 1194386 system_pods.go:61] "kube-proxy-9xcss" [287c0a26-1f93-4579-a5db-29b604571422] Running
	I0731 22:44:08.830853 1194386 system_pods.go:61] "kube-proxy-df4cg" [f225450d-1ebe-4a97-af4d-73edfb092291] Running
	I0731 22:44:08.830855 1194386 system_pods.go:61] "kube-proxy-nmkp9" [9253676c-a473-471b-b82e-c5e7fce39774] Running
	I0731 22:44:08.830859 1194386 system_pods.go:61] "kube-scheduler-ha-150891" [bc944154-4cb3-402d-9623-987c3acecd4c] Running
	I0731 22:44:08.830865 1194386 system_pods.go:61] "kube-scheduler-ha-150891-m02" [5e2a6e0a-df70-4e80-8f94-4a6ad47dffd9] Running
	I0731 22:44:08.830868 1194386 system_pods.go:61] "kube-scheduler-ha-150891-m03" [3c5e191f-b66b-4d95-bcdf-cf765eec91f8] Running
	I0731 22:44:08.830871 1194386 system_pods.go:61] "kube-vip-ha-150891" [1b703a99-faf3-4c2d-a871-0fb6bce0b917] Running
	I0731 22:44:08.830874 1194386 system_pods.go:61] "kube-vip-ha-150891-m02" [dc66b927-6e80-477f-9825-8385a3df1a03] Running
	I0731 22:44:08.830877 1194386 system_pods.go:61] "kube-vip-ha-150891-m03" [14435fd1-a3ab-4ca7-a5fe-3ed449a44aa2] Running
	I0731 22:44:08.830880 1194386 system_pods.go:61] "storage-provisioner" [c482636f-76e6-4ea7-9a14-3e9d6a7a4308] Running
	I0731 22:44:08.830887 1194386 system_pods.go:74] duration metric: took 186.991142ms to wait for pod list to return data ...
	I0731 22:44:08.830898 1194386 default_sa.go:34] waiting for default service account to be created ...
	I0731 22:44:09.016334 1194386 request.go:629] Waited for 185.355154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:44:09.016408 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:44:09.016415 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:09.016425 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:09.016429 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:09.020097 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:09.020256 1194386 default_sa.go:45] found service account: "default"
	I0731 22:44:09.020275 1194386 default_sa.go:55] duration metric: took 189.367438ms for default service account to be created ...
	I0731 22:44:09.020288 1194386 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 22:44:09.215697 1194386 request.go:629] Waited for 195.304297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:44:09.215777 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:44:09.215784 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:09.215795 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:09.215803 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:09.221974 1194386 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:44:09.228258 1194386 system_pods.go:86] 24 kube-system pods found
	I0731 22:44:09.228293 1194386 system_pods.go:89] "coredns-7db6d8ff4d-4928n" [258080d9-48d4-4214-a8c2-ccdd229a3a4f] Running
	I0731 22:44:09.228299 1194386 system_pods.go:89] "coredns-7db6d8ff4d-rkd4j" [b40942b0-bff9-4a49-88a3-d188d5b7dcbe] Running
	I0731 22:44:09.228306 1194386 system_pods.go:89] "etcd-ha-150891" [3f5f2e82-256b-406e-b58b-51255d338219] Running
	I0731 22:44:09.228311 1194386 system_pods.go:89] "etcd-ha-150891-m02" [d20ff7ae-a18e-476a-9f38-bf9d2eea9e32] Running
	I0731 22:44:09.228315 1194386 system_pods.go:89] "etcd-ha-150891-m03" [d320cf0e-70df-42ce-8a71-b103ab62c498] Running
	I0731 22:44:09.228319 1194386 system_pods.go:89] "kindnet-4qn8c" [4143fb96-5f2a-4107-807d-29ffbf5a95b8] Running
	I0731 22:44:09.228322 1194386 system_pods.go:89] "kindnet-8bkwq" [9d1ea907-d2a6-44ae-8a18-86686b21c2e6] Running
	I0731 22:44:09.228327 1194386 system_pods.go:89] "kindnet-bz2j7" [160def8b-f6ae-4664-8489-422121dd5a94] Running
	I0731 22:44:09.228331 1194386 system_pods.go:89] "kube-apiserver-ha-150891" [4b8aded2-d6a3-4493-ae6e-a345a4c1c872] Running
	I0731 22:44:09.228335 1194386 system_pods.go:89] "kube-apiserver-ha-150891-m02" [667b2e17-ae07-44a9-91ba-486fbacc93ae] Running
	I0731 22:44:09.228339 1194386 system_pods.go:89] "kube-apiserver-ha-150891-m03" [4dc100af-e2cd-4af9-a377-8486ba372988] Running
	I0731 22:44:09.228344 1194386 system_pods.go:89] "kube-controller-manager-ha-150891" [d3e86e76-fbc2-4732-acfc-8462570c27e4] Running
	I0731 22:44:09.228349 1194386 system_pods.go:89] "kube-controller-manager-ha-150891-m02" [952d0923-4ad6-4411-ae52-5bdfc69af65c] Running
	I0731 22:44:09.228353 1194386 system_pods.go:89] "kube-controller-manager-ha-150891-m03" [f38150d3-c750-45fa-ba87-cd66a1d1bf4d] Running
	I0731 22:44:09.228359 1194386 system_pods.go:89] "kube-proxy-9xcss" [287c0a26-1f93-4579-a5db-29b604571422] Running
	I0731 22:44:09.228364 1194386 system_pods.go:89] "kube-proxy-df4cg" [f225450d-1ebe-4a97-af4d-73edfb092291] Running
	I0731 22:44:09.228367 1194386 system_pods.go:89] "kube-proxy-nmkp9" [9253676c-a473-471b-b82e-c5e7fce39774] Running
	I0731 22:44:09.228371 1194386 system_pods.go:89] "kube-scheduler-ha-150891" [bc944154-4cb3-402d-9623-987c3acecd4c] Running
	I0731 22:44:09.228375 1194386 system_pods.go:89] "kube-scheduler-ha-150891-m02" [5e2a6e0a-df70-4e80-8f94-4a6ad47dffd9] Running
	I0731 22:44:09.228379 1194386 system_pods.go:89] "kube-scheduler-ha-150891-m03" [3c5e191f-b66b-4d95-bcdf-cf765eec91f8] Running
	I0731 22:44:09.228386 1194386 system_pods.go:89] "kube-vip-ha-150891" [1b703a99-faf3-4c2d-a871-0fb6bce0b917] Running
	I0731 22:44:09.228390 1194386 system_pods.go:89] "kube-vip-ha-150891-m02" [dc66b927-6e80-477f-9825-8385a3df1a03] Running
	I0731 22:44:09.228396 1194386 system_pods.go:89] "kube-vip-ha-150891-m03" [14435fd1-a3ab-4ca7-a5fe-3ed449a44aa2] Running
	I0731 22:44:09.228400 1194386 system_pods.go:89] "storage-provisioner" [c482636f-76e6-4ea7-9a14-3e9d6a7a4308] Running
	I0731 22:44:09.228415 1194386 system_pods.go:126] duration metric: took 208.121505ms to wait for k8s-apps to be running ...
	I0731 22:44:09.228424 1194386 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 22:44:09.228489 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:44:09.244628 1194386 system_svc.go:56] duration metric: took 16.191245ms WaitForService to wait for kubelet
	I0731 22:44:09.244664 1194386 kubeadm.go:582] duration metric: took 23.06783414s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:44:09.244691 1194386 node_conditions.go:102] verifying NodePressure condition ...
	I0731 22:44:09.416209 1194386 request.go:629] Waited for 171.4086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes
	I0731 22:44:09.416274 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes
	I0731 22:44:09.416279 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:09.416288 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:09.416292 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:09.419797 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:09.421056 1194386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:44:09.421082 1194386 node_conditions.go:123] node cpu capacity is 2
	I0731 22:44:09.421096 1194386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:44:09.421101 1194386 node_conditions.go:123] node cpu capacity is 2
	I0731 22:44:09.421109 1194386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:44:09.421115 1194386 node_conditions.go:123] node cpu capacity is 2
	I0731 22:44:09.421121 1194386 node_conditions.go:105] duration metric: took 176.424174ms to run NodePressure ...
	I0731 22:44:09.421141 1194386 start.go:241] waiting for startup goroutines ...
	I0731 22:44:09.421167 1194386 start.go:255] writing updated cluster config ...
	I0731 22:44:09.421576 1194386 ssh_runner.go:195] Run: rm -f paused
	I0731 22:44:09.476792 1194386 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 22:44:09.478929 1194386 out.go:177] * Done! kubectl is now configured to use "ha-150891" cluster and "default" namespace by default
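(Editor's note on the pod_ready.go trace above: the waiter alternates a GET on each system pod with a GET on that pod's node under a 6m0s per-pod budget, and the recurring ~195ms pauses come from client-side throttling in the Kubernetes client, not from API priority and fairness. The sketch below is a rough client-go illustration of such a readiness loop under assumed defaults, with the kubeconfig at the standard path and a pod name borrowed from the log; it is not the code minikube actually runs.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, which is the
// check behind the pod_ready.go "has status Ready:True" lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll one kube-system pod until it is Ready or the 6-minute budget
	// (the same timeout the log reports) runs out.
	podName := "kube-scheduler-ha-150891" // example name taken from the log
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient errors and keep polling
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q is Ready\n", podName)
}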
	
	
	==> CRI-O <==
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.208291143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466066208269753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a415e73b-b1cf-4990-9be0-29e0dfccabba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.208847514Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1444117a-f140-4cd8-92a3-c86454e06a99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.208899021Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1444117a-f140-4cd8-92a3-c86454e06a99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.209139707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722465854210187335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3efb8efde2a05c2c5ee11cb57e2715c8dbdcdbf679b9c4fe830a41da4707f26,PodSandboxId:c95c974d43c02f935f154ef6b981091f6c662790b401d88a6266673b24dc26cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722465712275947041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712295591607,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712235967440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bf
f9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722465700317979989,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172246569
6992296263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab90b2c667e4a162bc2808fd67610192ef721b38e5015a42dd1d8f9d180fc85,PodSandboxId:07b91077c5b52a52e9ed9f44742cd045be3e49a487c8c229488919f93ef85c58,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224656794
18778474,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74ae3dd0dd4606b1cdbc54e70c36a55,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86,PodSandboxId:39fb7cbb2c19921148ad6039669836e2344ee2af8050baf22644eae23cf7d866,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722465676798796740,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5,PodSandboxId:b43fbf7a4a5485d33256c1b3c49fb7b7599f768dd4f6770d51c3ce7e9011d3a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722465676805630781,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722465676786614631,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722465676790499396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1444117a-f140-4cd8-92a3-c86454e06a99 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.249739648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e2eaa19-a190-4d9e-98d0-1f836927ded1 name=/runtime.v1.RuntimeService/Version
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.249838015Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e2eaa19-a190-4d9e-98d0-1f836927ded1 name=/runtime.v1.RuntimeService/Version
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.251518787Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1423863-0b2d-4b72-b74a-c8cd4625e358 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.252336043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466066252304294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1423863-0b2d-4b72-b74a-c8cd4625e358 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.253179796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=938212dc-9b7a-4d9c-83ea-988977704bd2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.253251321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=938212dc-9b7a-4d9c-83ea-988977704bd2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.253594099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722465854210187335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3efb8efde2a05c2c5ee11cb57e2715c8dbdcdbf679b9c4fe830a41da4707f26,PodSandboxId:c95c974d43c02f935f154ef6b981091f6c662790b401d88a6266673b24dc26cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722465712275947041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712295591607,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712235967440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bf
f9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722465700317979989,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172246569
6992296263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab90b2c667e4a162bc2808fd67610192ef721b38e5015a42dd1d8f9d180fc85,PodSandboxId:07b91077c5b52a52e9ed9f44742cd045be3e49a487c8c229488919f93ef85c58,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224656794
18778474,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74ae3dd0dd4606b1cdbc54e70c36a55,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86,PodSandboxId:39fb7cbb2c19921148ad6039669836e2344ee2af8050baf22644eae23cf7d866,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722465676798796740,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5,PodSandboxId:b43fbf7a4a5485d33256c1b3c49fb7b7599f768dd4f6770d51c3ce7e9011d3a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722465676805630781,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722465676786614631,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722465676790499396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=938212dc-9b7a-4d9c-83ea-988977704bd2 name=/runtime.v1.RuntimeService/ListContainers
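(Editor's note on the CRI-O debug entries above: each cycle is the server side of a CRI client, such as the kubelet or crictl, polling the runtime with Version, ImageFsInfo, and an unfiltered ListContainers, repeated several times within the same second of this capture. The same three RPCs can be issued directly against the CRI socket; the sketch below uses the published CRI v1 client and assumes the default /var/run/crio/crio.sock path inside the VM, purely as an illustration.)

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial CRI-O's CRI socket (default path on the minikube guest; an assumption here).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// The same three RPCs the debug log records: Version, ImageFsInfo, ListContainers.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		panic(err)
	}
	for _, f := range fs.ImageFilesystems {
		fmt.Println("image fs:", f.FsId.Mountpoint, "used bytes:", f.UsedBytes.Value)
	}

	containers, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range containers.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}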
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.297883546Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=833546ab-279c-4aec-bede-ebab190098ed name=/runtime.v1.RuntimeService/Version
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.297976646Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=833546ab-279c-4aec-bede-ebab190098ed name=/runtime.v1.RuntimeService/Version
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.299330520Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75a66b2c-368c-4ab0-9c25-d46d93046510 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.300265444Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466066300229608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75a66b2c-368c-4ab0-9c25-d46d93046510 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.301095458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7476e5f8-e18d-4ddf-9d3c-1c23725554c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.301204371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7476e5f8-e18d-4ddf-9d3c-1c23725554c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.301640941Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722465854210187335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3efb8efde2a05c2c5ee11cb57e2715c8dbdcdbf679b9c4fe830a41da4707f26,PodSandboxId:c95c974d43c02f935f154ef6b981091f6c662790b401d88a6266673b24dc26cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722465712275947041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712295591607,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712235967440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bf
f9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722465700317979989,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172246569
6992296263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab90b2c667e4a162bc2808fd67610192ef721b38e5015a42dd1d8f9d180fc85,PodSandboxId:07b91077c5b52a52e9ed9f44742cd045be3e49a487c8c229488919f93ef85c58,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224656794
18778474,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74ae3dd0dd4606b1cdbc54e70c36a55,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86,PodSandboxId:39fb7cbb2c19921148ad6039669836e2344ee2af8050baf22644eae23cf7d866,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722465676798796740,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5,PodSandboxId:b43fbf7a4a5485d33256c1b3c49fb7b7599f768dd4f6770d51c3ce7e9011d3a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722465676805630781,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722465676786614631,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722465676790499396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7476e5f8-e18d-4ddf-9d3c-1c23725554c3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.337294750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f53bc161-8712-4568-90fd-787b46b7a048 name=/runtime.v1.RuntimeService/Version
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.337376638Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f53bc161-8712-4568-90fd-787b46b7a048 name=/runtime.v1.RuntimeService/Version
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.338421871Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e96d8668-2fae-468a-b8a2-5c52cbb379e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.339029593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466066339006672,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e96d8668-2fae-468a-b8a2-5c52cbb379e8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.339846945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1616ef57-4f9c-4a78-a7c4-42c493e84e2f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.339934426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1616ef57-4f9c-4a78-a7c4-42c493e84e2f name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:47:46 ha-150891 crio[676]: time="2024-07-31 22:47:46.340759023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722465854210187335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3efb8efde2a05c2c5ee11cb57e2715c8dbdcdbf679b9c4fe830a41da4707f26,PodSandboxId:c95c974d43c02f935f154ef6b981091f6c662790b401d88a6266673b24dc26cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722465712275947041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712295591607,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712235967440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bf
f9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722465700317979989,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172246569
6992296263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab90b2c667e4a162bc2808fd67610192ef721b38e5015a42dd1d8f9d180fc85,PodSandboxId:07b91077c5b52a52e9ed9f44742cd045be3e49a487c8c229488919f93ef85c58,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224656794
18778474,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74ae3dd0dd4606b1cdbc54e70c36a55,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86,PodSandboxId:39fb7cbb2c19921148ad6039669836e2344ee2af8050baf22644eae23cf7d866,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722465676798796740,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5,PodSandboxId:b43fbf7a4a5485d33256c1b3c49fb7b7599f768dd4f6770d51c3ce7e9011d3a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722465676805630781,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722465676786614631,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722465676790499396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1616ef57-4f9c-4a78-a7c4-42c493e84e2f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17bbba80074e2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   23ff00497365e       busybox-fc5497c4f-98526
	6c2d6faeccb11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   60acb98d73509       coredns-7db6d8ff4d-4928n
	e3efb8efde2a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   c95c974d43c02       storage-provisioner
	569d471778fea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   911e886f5312d       coredns-7db6d8ff4d-rkd4j
	6800ea54157a1       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   de805f7545942       kindnet-4qn8c
	45f49431a7774       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   af4274f85760c       kube-proxy-9xcss
	8ab90b2c667e4       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   07b91077c5b52       kube-vip-ha-150891
	8ae0e6eb6658d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   b43fbf7a4a548       kube-apiserver-ha-150891
	92f65fc372a62       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   39fb7cbb2c199       kube-controller-manager-ha-150891
	31a5692b683c3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   148244b8abdde       etcd-ha-150891
	c5a522e53c2bc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   015145f976eb6       kube-scheduler-ha-150891
	
	
	==> coredns [569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2] <==
	[INFO] 10.244.2.2:39965 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000115385s
	[INFO] 10.244.0.4:53269 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00369728s
	[INFO] 10.244.0.4:36211 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115481s
	[INFO] 10.244.0.4:59572 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163023s
	[INFO] 10.244.0.4:35175 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158586s
	[INFO] 10.244.1.2:33021 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180449s
	[INFO] 10.244.1.2:54691 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000080124s
	[INFO] 10.244.1.2:59380 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104324s
	[INFO] 10.244.2.2:46771 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088924s
	[INFO] 10.244.2.2:51063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001242769s
	[INFO] 10.244.2.2:49935 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074586s
	[INFO] 10.244.0.4:56290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010407s
	[INFO] 10.244.0.4:57803 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109451s
	[INFO] 10.244.1.2:53651 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133423s
	[INFO] 10.244.1.2:54989 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149762s
	[INFO] 10.244.1.2:55181 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079999s
	[INFO] 10.244.1.2:45949 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096277s
	[INFO] 10.244.2.2:38998 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160565s
	[INFO] 10.244.2.2:55687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080958s
	[INFO] 10.244.0.4:36222 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152278s
	[INFO] 10.244.0.4:55182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115569s
	[INFO] 10.244.0.4:40749 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099022s
	[INFO] 10.244.1.2:42636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134944s
	[INFO] 10.244.1.2:45102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091957s
	[INFO] 10.244.1.2:39878 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081213s
	
	
	==> coredns [6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811] <==
	[INFO] 10.244.2.2:44462 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001610027s
	[INFO] 10.244.0.4:37392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001106s
	[INFO] 10.244.0.4:45747 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144796s
	[INFO] 10.244.0.4:48856 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004798514s
	[INFO] 10.244.0.4:44718 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011559s
	[INFO] 10.244.1.2:39166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153589s
	[INFO] 10.244.1.2:53738 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00171146s
	[INFO] 10.244.1.2:53169 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192547s
	[INFO] 10.244.1.2:46534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001207677s
	[INFO] 10.244.1.2:40987 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092132s
	[INFO] 10.244.2.2:51004 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179521s
	[INFO] 10.244.2.2:44618 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001670196s
	[INFO] 10.244.2.2:34831 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094811s
	[INFO] 10.244.2.2:49392 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000285273s
	[INFO] 10.244.2.2:44694 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111378s
	[INFO] 10.244.0.4:58491 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160933s
	[INFO] 10.244.0.4:44490 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217734s
	[INFO] 10.244.2.2:53960 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106212s
	[INFO] 10.244.2.2:47661 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161869s
	[INFO] 10.244.0.4:43273 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101944s
	[INFO] 10.244.1.2:54182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187102s
	[INFO] 10.244.2.2:60067 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151741s
	[INFO] 10.244.2.2:49034 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160035s
	[INFO] 10.244.2.2:49392 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096218s
	[INFO] 10.244.2.2:59220 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129048s
	
	
	==> describe nodes <==
	Name:               ha-150891
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T22_41_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:41:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:47:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:44:26 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:44:26 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:44:26 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:44:26 +0000   Wed, 31 Jul 2024 22:41:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-150891
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a8ca2005fa042d7a84b5199ab2c7a15
	  System UUID:                6a8ca200-5fa0-42d7-a84b-5199ab2c7a15
	  Boot ID:                    2ffe06f6-f7c0-4945-b70b-2276f3221b95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-98526              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 coredns-7db6d8ff4d-4928n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-7db6d8ff4d-rkd4j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-150891                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m23s
	  kube-system                 kindnet-4qn8c                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-150891             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-controller-manager-ha-150891    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-proxy-9xcss                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-150891             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-vip-ha-150891                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m9s   kube-proxy       
	  Normal  Starting                 6m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m23s  kubelet          Node ha-150891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s  kubelet          Node ha-150891 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m23s  kubelet          Node ha-150891 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m10s  node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal  NodeReady                5m55s  kubelet          Node ha-150891 status is now: NodeReady
	  Normal  RegisteredNode           4m59s  node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal  RegisteredNode           3m47s  node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	
	
	Name:               ha-150891-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_42_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:42:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:45:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 22:44:32 +0000   Wed, 31 Jul 2024 22:46:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 22:44:32 +0000   Wed, 31 Jul 2024 22:46:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 22:44:32 +0000   Wed, 31 Jul 2024 22:46:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 22:44:32 +0000   Wed, 31 Jul 2024 22:46:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-150891-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1608b7369bb468b8c8c5013f81b09bb
	  System UUID:                c1608b73-69bb-468b-8c8c-5013f81b09bb
	  Boot ID:                    8dafe8a2-11cb-4840-b6a7-75e519b66bfd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cwsjc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-150891-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m14s
	  kube-system                 kindnet-bz2j7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m16s
	  kube-system                 kube-apiserver-ha-150891-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-ha-150891-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-proxy-nmkp9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-scheduler-ha-150891-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-vip-ha-150891-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m12s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m17s)  kubelet          Node ha-150891-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m17s)  kubelet          Node ha-150891-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m17s)  kubelet          Node ha-150891-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           4m59s                  node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           3m47s                  node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  NodeNotReady             102s                   node-controller  Node ha-150891-m02 status is now: NodeNotReady
	
	
	Name:               ha-150891-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_43_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:43:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:47:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:44:43 +0000   Wed, 31 Jul 2024 22:43:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:44:43 +0000   Wed, 31 Jul 2024 22:43:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:44:43 +0000   Wed, 31 Jul 2024 22:43:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:44:43 +0000   Wed, 31 Jul 2024 22:44:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-150891-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55f48101720847269fc5703e686b1c56
	  System UUID:                55f48101-7208-4726-9fc5-703e686b1c56
	  Boot ID:                    81b14277-4c6c-4d69-82a6-40f099138a1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gzb99                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-150891-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-8bkwq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-150891-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-controller-manager-ha-150891-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-df4cg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-150891-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-vip-ha-150891-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node ha-150891-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node ha-150891-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node ha-150891-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m                   node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	  Normal  RegisteredNode           3m47s                node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	
	
	Name:               ha-150891-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_44_46_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:44:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:47:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:45:16 +0000   Wed, 31 Jul 2024 22:44:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:45:16 +0000   Wed, 31 Jul 2024 22:44:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:45:16 +0000   Wed, 31 Jul 2024 22:44:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:45:16 +0000   Wed, 31 Jul 2024 22:45:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    ha-150891-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bdcf2d763364b5cbf54f471f1e49c03
	  System UUID:                7bdcf2d7-6336-4b5c-bf54-f471f1e49c03
	  Boot ID:                    c717e92b-7c1b-482a-898b-ac9f84a2f188
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4ghcd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-l8srs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-150891-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-150891-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-150891-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m                   node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal  NodeAllocatableEnforced  3m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-150891-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul31 22:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048173] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036734] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.718655] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.876549] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.547584] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 22:41] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.059402] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055698] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.187489] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.128918] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.269933] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.169130] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +3.879571] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.061597] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.693408] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +0.081387] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.056574] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.292402] kauditd_printk_skb: 38 callbacks suppressed
	[Jul31 22:42] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8] <==
	{"level":"warn","ts":"2024-07-31T22:47:46.285902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.329465Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.590097Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.603779Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.612424Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.619314Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.623417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.624988Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.627019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.628684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.635385Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.641141Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.647202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.651263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.652239Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.656265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.666599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.673838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.679889Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.683844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.687409Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.693375Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.700118Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.707851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:47:46.729586Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:47:46 up 7 min,  0 users,  load average: 0.40, 0.32, 0.15
	Linux ha-150891 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f] <==
	I0731 22:47:11.248031       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:47:21.241394       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:47:21.241427       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:47:21.241568       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:47:21.241592       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:47:21.241641       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:47:21.241647       1 main.go:299] handling current node
	I0731 22:47:21.241658       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:47:21.241663       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:47:31.244822       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:47:31.244917       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:47:31.245108       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:47:31.245383       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:47:31.245582       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:47:31.245618       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:47:31.245685       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:47:31.245805       1 main.go:299] handling current node
	I0731 22:47:41.239791       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:47:41.241080       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:47:41.241901       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:47:41.241976       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:47:41.242090       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:47:41.243508       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:47:41.243674       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:47:41.243812       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5] <==
	W0731 22:41:21.674980       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.105]
	I0731 22:41:21.676103       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 22:41:21.681570       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 22:41:21.776540       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 22:41:23.058039       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 22:41:23.086959       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 22:41:23.107753       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 22:41:36.281096       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 22:41:36.291653       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0731 22:44:15.245172       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35480: use of closed network connection
	E0731 22:44:15.440060       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35500: use of closed network connection
	E0731 22:44:15.633671       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35518: use of closed network connection
	E0731 22:44:15.834481       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35548: use of closed network connection
	E0731 22:44:16.021022       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35572: use of closed network connection
	E0731 22:44:16.212498       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35596: use of closed network connection
	E0731 22:44:16.390433       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35606: use of closed network connection
	E0731 22:44:16.576806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35620: use of closed network connection
	E0731 22:44:16.767322       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35632: use of closed network connection
	E0731 22:44:17.064161       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35668: use of closed network connection
	E0731 22:44:17.243491       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35696: use of closed network connection
	E0731 22:44:17.426959       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35708: use of closed network connection
	E0731 22:44:17.608016       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35726: use of closed network connection
	E0731 22:44:17.782924       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35746: use of closed network connection
	E0731 22:44:17.965215       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35762: use of closed network connection
	W0731 22:45:41.685846       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.105 192.168.39.241]
	
	
	==> kube-controller-manager [92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86] <==
	I0731 22:43:46.262884       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-150891-m03"
	I0731 22:44:10.374961       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.920507ms"
	I0731 22:44:10.423976       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.943673ms"
	I0731 22:44:10.598390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="174.303069ms"
	I0731 22:44:10.688164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.667837ms"
	I0731 22:44:10.712141       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.907255ms"
	I0731 22:44:10.712544       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.798µs"
	I0731 22:44:10.758206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.471521ms"
	I0731 22:44:10.758423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.491µs"
	I0731 22:44:12.205295       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.069µs"
	I0731 22:44:13.011052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.806µs"
	I0731 22:44:13.479543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.911469ms"
	I0731 22:44:13.480355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.095µs"
	I0731 22:44:13.541457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.954691ms"
	I0731 22:44:13.542515       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.112µs"
	I0731 22:44:14.698025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.750172ms"
	I0731 22:44:14.698160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.42µs"
	E0731 22:44:45.863658       1 certificate_controller.go:146] Sync csr-bsh6r failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-bsh6r": the object has been modified; please apply your changes to the latest version and try again
	I0731 22:44:46.107072       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-150891-m04\" does not exist"
	I0731 22:44:46.171999       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-150891-m04" podCIDRs=["10.244.3.0/24"]
	I0731 22:44:46.272280       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-150891-m04"
	I0731 22:45:05.278325       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-150891-m04"
	I0731 22:46:04.790411       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-150891-m04"
	I0731 22:46:04.941965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.384307ms"
	I0731 22:46:04.942097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.951µs"
	
	
	==> kube-proxy [45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526] <==
	I0731 22:41:37.271747       1 server_linux.go:69] "Using iptables proxy"
	I0731 22:41:37.292067       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.105"]
	I0731 22:41:37.329876       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 22:41:37.329936       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 22:41:37.329954       1 server_linux.go:165] "Using iptables Proxier"
	I0731 22:41:37.333189       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 22:41:37.333799       1 server.go:872] "Version info" version="v1.30.3"
	I0731 22:41:37.333827       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:41:37.335327       1 config.go:192] "Starting service config controller"
	I0731 22:41:37.335755       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 22:41:37.335806       1 config.go:101] "Starting endpoint slice config controller"
	I0731 22:41:37.335822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 22:41:37.336508       1 config.go:319] "Starting node config controller"
	I0731 22:41:37.336539       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 22:41:37.436754       1 shared_informer.go:320] Caches are synced for node config
	I0731 22:41:37.436810       1 shared_informer.go:320] Caches are synced for service config
	I0731 22:41:37.436865       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78] <==
	W0731 22:41:21.061752       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:41:21.061892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 22:41:21.079018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 22:41:21.079065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 22:41:21.088068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 22:41:21.088125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 22:41:21.163742       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 22:41:21.163825       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 22:41:21.239473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 22:41:21.239521       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 22:41:21.346622       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 22:41:21.346664       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 22:41:24.138667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 22:43:42.312042       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-df4cg\": pod kube-proxy-df4cg is already assigned to node \"ha-150891-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-df4cg" node="ha-150891-m03"
	E0731 22:43:42.312149       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-df4cg\": pod kube-proxy-df4cg is already assigned to node \"ha-150891-m03\"" pod="kube-system/kube-proxy-df4cg"
	E0731 22:43:42.318233       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8bkwq\": pod kindnet-8bkwq is already assigned to node \"ha-150891-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8bkwq" node="ha-150891-m03"
	E0731 22:43:42.318296       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9d1ea907-d2a6-44ae-8a18-86686b21c2e6(kube-system/kindnet-8bkwq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-8bkwq"
	E0731 22:43:42.318311       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8bkwq\": pod kindnet-8bkwq is already assigned to node \"ha-150891-m03\"" pod="kube-system/kindnet-8bkwq"
	I0731 22:43:42.318329       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8bkwq" node="ha-150891-m03"
	E0731 22:44:46.183027       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-djfjt\": pod kindnet-djfjt is already assigned to node \"ha-150891-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-djfjt" node="ha-150891-m04"
	E0731 22:44:46.183131       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-djfjt\": pod kindnet-djfjt is already assigned to node \"ha-150891-m04\"" pod="kube-system/kindnet-djfjt"
	E0731 22:44:46.227608       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4ghcd\": pod kindnet-4ghcd is already assigned to node \"ha-150891-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4ghcd" node="ha-150891-m04"
	E0731 22:44:46.227760       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4ghcd\": pod kindnet-4ghcd is already assigned to node \"ha-150891-m04\"" pod="kube-system/kindnet-4ghcd"
	E0731 22:44:46.228158       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5wxdl\": pod kube-proxy-5wxdl is already assigned to node \"ha-150891-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5wxdl" node="ha-150891-m04"
	E0731 22:44:46.228265       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5wxdl\": pod kube-proxy-5wxdl is already assigned to node \"ha-150891-m04\"" pod="kube-system/kube-proxy-5wxdl"
	
	
	==> kubelet <==
	Jul 31 22:44:10 ha-150891 kubelet[1359]: E0731 22:44:10.388122    1359 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-150891" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-150891' and this object
	Jul 31 22:44:10 ha-150891 kubelet[1359]: I0731 22:44:10.467610    1359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5v7f\" (UniqueName: \"kubernetes.io/projected/f2b8a59d-2816-4c02-9563-0182ea51e862-kube-api-access-q5v7f\") pod \"busybox-fc5497c4f-98526\" (UID: \"f2b8a59d-2816-4c02-9563-0182ea51e862\") " pod="default/busybox-fc5497c4f-98526"
	Jul 31 22:44:11 ha-150891 kubelet[1359]: E0731 22:44:11.609051    1359 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Jul 31 22:44:11 ha-150891 kubelet[1359]: E0731 22:44:11.609124    1359 projected.go:200] Error preparing data for projected volume kube-api-access-q5v7f for pod default/busybox-fc5497c4f-98526: failed to sync configmap cache: timed out waiting for the condition
	Jul 31 22:44:11 ha-150891 kubelet[1359]: E0731 22:44:11.609648    1359 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f2b8a59d-2816-4c02-9563-0182ea51e862-kube-api-access-q5v7f podName:f2b8a59d-2816-4c02-9563-0182ea51e862 nodeName:}" failed. No retries permitted until 2024-07-31 22:44:12.109205771 +0000 UTC m=+169.265544806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q5v7f" (UniqueName: "kubernetes.io/projected/f2b8a59d-2816-4c02-9563-0182ea51e862-kube-api-access-q5v7f") pod "busybox-fc5497c4f-98526" (UID: "f2b8a59d-2816-4c02-9563-0182ea51e862") : failed to sync configmap cache: timed out waiting for the condition
	Jul 31 22:44:23 ha-150891 kubelet[1359]: E0731 22:44:23.027458    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:44:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:44:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:44:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:44:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:45:23 ha-150891 kubelet[1359]: E0731 22:45:23.031118    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:45:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:45:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:45:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:45:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:46:23 ha-150891 kubelet[1359]: E0731 22:46:23.027472    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:46:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:46:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:46:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:46:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:47:23 ha-150891 kubelet[1359]: E0731 22:47:23.026828    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:47:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:47:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:47:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:47:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-150891 -n ha-150891
helpers_test.go:261: (dbg) Run:  kubectl --context ha-150891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (50.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr: exit status 3 (3.189747278s)

                                                
                                                
-- stdout --
	ha-150891
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-150891-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:47:51.286699 1199188 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:47:51.286973 1199188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:47:51.286982 1199188 out.go:304] Setting ErrFile to fd 2...
	I0731 22:47:51.286987 1199188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:47:51.287219 1199188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:47:51.287433 1199188 out.go:298] Setting JSON to false
	I0731 22:47:51.287464 1199188 mustload.go:65] Loading cluster: ha-150891
	I0731 22:47:51.287579 1199188 notify.go:220] Checking for updates...
	I0731 22:47:51.287915 1199188 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:47:51.287935 1199188 status.go:255] checking status of ha-150891 ...
	I0731 22:47:51.288402 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:51.288488 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:51.304298 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36149
	I0731 22:47:51.304827 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:51.305518 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:51.305546 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:51.305927 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:51.306157 1199188 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:47:51.308020 1199188 status.go:330] ha-150891 host status = "Running" (err=<nil>)
	I0731 22:47:51.308043 1199188 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:47:51.308398 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:51.308448 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:51.324709 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33597
	I0731 22:47:51.325231 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:51.325822 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:51.325847 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:51.326257 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:51.326513 1199188 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:47:51.330144 1199188 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:51.330691 1199188 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:47:51.330761 1199188 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:51.330883 1199188 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:47:51.331321 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:51.331404 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:51.348942 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34375
	I0731 22:47:51.349381 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:51.349893 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:51.349926 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:51.350332 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:51.350592 1199188 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:47:51.350812 1199188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:47:51.350845 1199188 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:47:51.353813 1199188 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:51.354191 1199188 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:47:51.354220 1199188 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:51.354436 1199188 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:47:51.354625 1199188 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:47:51.354787 1199188 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:47:51.354973 1199188 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:47:51.444166 1199188 ssh_runner.go:195] Run: systemctl --version
	I0731 22:47:51.458824 1199188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:47:51.474981 1199188 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:47:51.475016 1199188 api_server.go:166] Checking apiserver status ...
	I0731 22:47:51.475051 1199188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:47:51.491735 1199188 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0731 22:47:51.503550 1199188 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:47:51.503626 1199188 ssh_runner.go:195] Run: ls
	I0731 22:47:51.511923 1199188 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:47:51.518034 1199188 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:47:51.518063 1199188 status.go:422] ha-150891 apiserver status = Running (err=<nil>)
	I0731 22:47:51.518073 1199188 status.go:257] ha-150891 status: &{Name:ha-150891 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:47:51.518090 1199188 status.go:255] checking status of ha-150891-m02 ...
	I0731 22:47:51.518377 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:51.518400 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:51.534105 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40733
	I0731 22:47:51.534643 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:51.535156 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:51.535179 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:51.535474 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:51.535660 1199188 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:47:51.537439 1199188 status.go:330] ha-150891-m02 host status = "Running" (err=<nil>)
	I0731 22:47:51.537459 1199188 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:47:51.537761 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:51.537805 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:51.553617 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45631
	I0731 22:47:51.554035 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:51.554516 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:51.554545 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:51.554857 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:51.555019 1199188 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:47:51.557692 1199188 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:51.558094 1199188 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:47:51.558123 1199188 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:51.558267 1199188 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:47:51.558620 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:51.558656 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:51.573826 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37845
	I0731 22:47:51.574280 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:51.574756 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:51.574780 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:51.575098 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:51.575379 1199188 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:47:51.575576 1199188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:47:51.575603 1199188 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:47:51.578435 1199188 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:51.578825 1199188 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:47:51.578848 1199188 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:51.579024 1199188 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:47:51.579181 1199188 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:47:51.579343 1199188 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:47:51.579461 1199188 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	W0731 22:47:54.080479 1199188 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.224:22: connect: no route to host
	W0731 22:47:54.080607 1199188 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0731 22:47:54.080624 1199188 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:47:54.080631 1199188 status.go:257] ha-150891-m02 status: &{Name:ha-150891-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 22:47:54.080652 1199188 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:47:54.080666 1199188 status.go:255] checking status of ha-150891-m03 ...
	I0731 22:47:54.080976 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:54.081020 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:54.096708 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35533
	I0731 22:47:54.097211 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:54.097740 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:54.097761 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:54.098136 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:54.098358 1199188 main.go:141] libmachine: (ha-150891-m03) Calling .GetState
	I0731 22:47:54.100111 1199188 status.go:330] ha-150891-m03 host status = "Running" (err=<nil>)
	I0731 22:47:54.100129 1199188 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:47:54.100424 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:54.100464 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:54.115718 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I0731 22:47:54.116271 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:54.116791 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:54.116822 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:54.117135 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:54.117321 1199188 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:47:54.120191 1199188 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:47:54.120634 1199188 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:47:54.120664 1199188 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:47:54.120785 1199188 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:47:54.121078 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:54.121117 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:54.137124 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43381
	I0731 22:47:54.137587 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:54.138064 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:54.138084 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:54.138482 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:54.138692 1199188 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:47:54.138865 1199188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:47:54.138888 1199188 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:47:54.141720 1199188 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:47:54.142148 1199188 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:47:54.142180 1199188 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:47:54.142336 1199188 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:47:54.142540 1199188 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:47:54.142753 1199188 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:47:54.142920 1199188 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:47:54.223011 1199188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:47:54.237839 1199188 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:47:54.237873 1199188 api_server.go:166] Checking apiserver status ...
	I0731 22:47:54.237917 1199188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:47:54.253138 1199188 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup
	W0731 22:47:54.268122 1199188 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:47:54.268178 1199188 ssh_runner.go:195] Run: ls
	I0731 22:47:54.272726 1199188 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:47:54.277172 1199188 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:47:54.277205 1199188 status.go:422] ha-150891-m03 apiserver status = Running (err=<nil>)
	I0731 22:47:54.277216 1199188 status.go:257] ha-150891-m03 status: &{Name:ha-150891-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:47:54.277237 1199188 status.go:255] checking status of ha-150891-m04 ...
	I0731 22:47:54.277668 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:54.277705 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:54.292983 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I0731 22:47:54.293405 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:54.293937 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:54.293957 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:54.294331 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:54.294537 1199188 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:47:54.296490 1199188 status.go:330] ha-150891-m04 host status = "Running" (err=<nil>)
	I0731 22:47:54.296507 1199188 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:47:54.296828 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:54.296877 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:54.312384 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40917
	I0731 22:47:54.313025 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:54.313556 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:54.313576 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:54.313955 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:54.314149 1199188 main.go:141] libmachine: (ha-150891-m04) Calling .GetIP
	I0731 22:47:54.317145 1199188 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:47:54.317576 1199188 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:47:54.317598 1199188 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:47:54.317783 1199188 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:47:54.318180 1199188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:54.318228 1199188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:54.333689 1199188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35757
	I0731 22:47:54.334136 1199188 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:54.334605 1199188 main.go:141] libmachine: Using API Version  1
	I0731 22:47:54.334629 1199188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:54.334955 1199188 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:54.335155 1199188 main.go:141] libmachine: (ha-150891-m04) Calling .DriverName
	I0731 22:47:54.335334 1199188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:47:54.335357 1199188 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHHostname
	I0731 22:47:54.338066 1199188 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:47:54.338499 1199188 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:47:54.338526 1199188 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:47:54.338715 1199188 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHPort
	I0731 22:47:54.338907 1199188 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHKeyPath
	I0731 22:47:54.339072 1199188 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHUsername
	I0731 22:47:54.339238 1199188 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m04/id_rsa Username:docker}
	I0731 22:47:54.415359 1199188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:47:54.430493 1199188 status.go:257] ha-150891-m04 status: &{Name:ha-150891-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr: exit status 3 (5.184242831s)

                                                
                                                
-- stdout --
	ha-150891
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-150891-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:47:55.435735 1199288 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:47:55.435848 1199288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:47:55.435856 1199288 out.go:304] Setting ErrFile to fd 2...
	I0731 22:47:55.435861 1199288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:47:55.436065 1199288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:47:55.436284 1199288 out.go:298] Setting JSON to false
	I0731 22:47:55.436311 1199288 mustload.go:65] Loading cluster: ha-150891
	I0731 22:47:55.436424 1199288 notify.go:220] Checking for updates...
	I0731 22:47:55.436787 1199288 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:47:55.436806 1199288 status.go:255] checking status of ha-150891 ...
	I0731 22:47:55.437398 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:55.437467 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:55.453689 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0731 22:47:55.454192 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:55.454868 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:47:55.454889 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:55.455272 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:55.455511 1199288 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:47:55.457113 1199288 status.go:330] ha-150891 host status = "Running" (err=<nil>)
	I0731 22:47:55.457146 1199288 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:47:55.457473 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:55.457511 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:55.473041 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0731 22:47:55.473540 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:55.474149 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:47:55.474194 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:55.474643 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:55.474871 1199288 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:47:55.479853 1199288 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:55.480304 1199288 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:47:55.480334 1199288 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:55.480510 1199288 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:47:55.480822 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:55.480868 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:55.497821 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0731 22:47:55.498294 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:55.498948 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:47:55.498976 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:55.499319 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:55.499578 1199288 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:47:55.499772 1199288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:47:55.499823 1199288 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:47:55.502881 1199288 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:55.503403 1199288 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:47:55.503430 1199288 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:47:55.503657 1199288 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:47:55.503866 1199288 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:47:55.504044 1199288 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:47:55.504341 1199288 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:47:55.591620 1199288 ssh_runner.go:195] Run: systemctl --version
	I0731 22:47:55.597722 1199288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:47:55.613236 1199288 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:47:55.613269 1199288 api_server.go:166] Checking apiserver status ...
	I0731 22:47:55.613304 1199288 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:47:55.627153 1199288 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0731 22:47:55.636544 1199288 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:47:55.636611 1199288 ssh_runner.go:195] Run: ls
	I0731 22:47:55.641013 1199288 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:47:55.646754 1199288 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:47:55.646802 1199288 status.go:422] ha-150891 apiserver status = Running (err=<nil>)
	I0731 22:47:55.646817 1199288 status.go:257] ha-150891 status: &{Name:ha-150891 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:47:55.646845 1199288 status.go:255] checking status of ha-150891-m02 ...
	I0731 22:47:55.647182 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:55.647214 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:55.662600 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39427
	I0731 22:47:55.663169 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:55.663738 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:47:55.663771 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:55.664100 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:55.664331 1199288 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:47:55.665884 1199288 status.go:330] ha-150891-m02 host status = "Running" (err=<nil>)
	I0731 22:47:55.665901 1199288 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:47:55.666249 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:55.666282 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:55.681457 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41277
	I0731 22:47:55.681985 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:55.682551 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:47:55.682574 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:55.682952 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:55.683169 1199288 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:47:55.686266 1199288 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:55.686762 1199288 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:47:55.686799 1199288 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:55.686916 1199288 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:47:55.687262 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:47:55.687316 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:47:55.703002 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39413
	I0731 22:47:55.703473 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:47:55.703954 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:47:55.703978 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:47:55.704322 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:47:55.704554 1199288 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:47:55.704747 1199288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:47:55.704771 1199288 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:47:55.707733 1199288 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:55.708185 1199288 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:47:55.708213 1199288 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:47:55.708390 1199288 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:47:55.708616 1199288 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:47:55.708782 1199288 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:47:55.708964 1199288 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	W0731 22:47:57.152494 1199288 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:47:57.152557 1199288 retry.go:31] will retry after 256.669433ms: dial tcp 192.168.39.224:22: connect: no route to host
	W0731 22:48:00.228372 1199288 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.224:22: connect: no route to host
	W0731 22:48:00.228486 1199288 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0731 22:48:00.228512 1199288 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:00.228524 1199288 status.go:257] ha-150891-m02 status: &{Name:ha-150891-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 22:48:00.228564 1199288 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:00.228577 1199288 status.go:255] checking status of ha-150891-m03 ...
	I0731 22:48:00.228934 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:00.228986 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:00.245014 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36803
	I0731 22:48:00.245552 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:00.246061 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:48:00.246086 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:00.246506 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:00.246760 1199288 main.go:141] libmachine: (ha-150891-m03) Calling .GetState
	I0731 22:48:00.248518 1199288 status.go:330] ha-150891-m03 host status = "Running" (err=<nil>)
	I0731 22:48:00.248537 1199288 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:00.248936 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:00.248992 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:00.264529 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
	I0731 22:48:00.265006 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:00.265572 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:48:00.265597 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:00.265946 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:00.266147 1199288 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:48:00.268755 1199288 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:00.269172 1199288 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:00.269202 1199288 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:00.269369 1199288 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:00.269695 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:00.269742 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:00.285012 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37463
	I0731 22:48:00.285540 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:00.286020 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:48:00.286043 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:00.286372 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:00.286553 1199288 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:48:00.286734 1199288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:00.286755 1199288 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:48:00.289752 1199288 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:00.290116 1199288 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:00.290152 1199288 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:00.290325 1199288 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:48:00.290511 1199288 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:48:00.290676 1199288 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:48:00.290798 1199288 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:48:00.373445 1199288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:00.387465 1199288 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:48:00.387498 1199288 api_server.go:166] Checking apiserver status ...
	I0731 22:48:00.387531 1199288 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:48:00.401807 1199288 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup
	W0731 22:48:00.412159 1199288 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:48:00.412220 1199288 ssh_runner.go:195] Run: ls
	I0731 22:48:00.416820 1199288 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:48:00.421259 1199288 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:48:00.421288 1199288 status.go:422] ha-150891-m03 apiserver status = Running (err=<nil>)
	I0731 22:48:00.421296 1199288 status.go:257] ha-150891-m03 status: &{Name:ha-150891-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:00.421316 1199288 status.go:255] checking status of ha-150891-m04 ...
	I0731 22:48:00.421633 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:00.421661 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:00.436994 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41801
	I0731 22:48:00.437527 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:00.437991 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:48:00.438040 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:00.438468 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:00.438692 1199288 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:48:00.440258 1199288 status.go:330] ha-150891-m04 host status = "Running" (err=<nil>)
	I0731 22:48:00.440286 1199288 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:00.440593 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:00.440618 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:00.456003 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I0731 22:48:00.456517 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:00.457002 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:48:00.457023 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:00.457340 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:00.457565 1199288 main.go:141] libmachine: (ha-150891-m04) Calling .GetIP
	I0731 22:48:00.460898 1199288 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:00.461314 1199288 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:00.461339 1199288 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:00.461494 1199288 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:00.461847 1199288 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:00.461891 1199288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:00.477226 1199288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0731 22:48:00.477690 1199288 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:00.478157 1199288 main.go:141] libmachine: Using API Version  1
	I0731 22:48:00.478183 1199288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:00.478518 1199288 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:00.478717 1199288 main.go:141] libmachine: (ha-150891-m04) Calling .DriverName
	I0731 22:48:00.478909 1199288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:00.478933 1199288 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHHostname
	I0731 22:48:00.481569 1199288 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:00.482007 1199288 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:00.482036 1199288 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:00.482247 1199288 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHPort
	I0731 22:48:00.482423 1199288 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHKeyPath
	I0731 22:48:00.482589 1199288 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHUsername
	I0731 22:48:00.482733 1199288 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m04/id_rsa Username:docker}
	I0731 22:48:00.558883 1199288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:00.573189 1199288 status.go:257] ha-150891-m04 status: &{Name:ha-150891-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
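
Note: the stderr above shows the failure mechanism for ha-150891-m02: the SSH dial to 192.168.39.224:22 gets "connect: no route to host", is retried with a short backoff (sshutil.go / retry.go), and then the node is reported as Host:Error. A minimal sketch of that dial-and-retry behaviour (timings and attempt count are illustrative assumptions, not minikube's exact values):

	// dial_retry.go: TCP dial to <node-ip>:22 with a small backoff before giving up.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func dialSSHWithRetry(addr string, attempts int) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err // e.g. "connect: no route to host" when the VM is unreachable
			fmt.Printf("dial failure (will retry): %v\n", err)
			time.Sleep(time.Duration(i+1) * 300 * time.Millisecond)
		}
		return nil, fmt.Errorf("giving up on %s: %w", addr, lastErr)
	}

	func main() {
		if _, err := dialSSHWithRetry("192.168.39.224:22", 3); err != nil {
			fmt.Println(err) // mirrors the "status error" path recorded for ha-150891-m02
		}
	}
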
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr: exit status 3 (4.527848227s)

                                                
                                                
-- stdout --
	ha-150891
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-150891-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:48:02.564683 1199395 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:48:02.564933 1199395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:02.564941 1199395 out.go:304] Setting ErrFile to fd 2...
	I0731 22:48:02.564945 1199395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:02.565150 1199395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:48:02.565306 1199395 out.go:298] Setting JSON to false
	I0731 22:48:02.565332 1199395 mustload.go:65] Loading cluster: ha-150891
	I0731 22:48:02.565432 1199395 notify.go:220] Checking for updates...
	I0731 22:48:02.565715 1199395 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:48:02.565733 1199395 status.go:255] checking status of ha-150891 ...
	I0731 22:48:02.566152 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:02.566212 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:02.582697 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39213
	I0731 22:48:02.583197 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:02.583948 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:02.583996 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:02.584393 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:02.584607 1199395 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:48:02.586326 1199395 status.go:330] ha-150891 host status = "Running" (err=<nil>)
	I0731 22:48:02.586355 1199395 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:48:02.586663 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:02.586701 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:02.603219 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I0731 22:48:02.603720 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:02.604337 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:02.604365 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:02.604725 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:02.604921 1199395 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:48:02.607977 1199395 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:02.608437 1199395 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:48:02.608466 1199395 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:02.608649 1199395 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:48:02.609030 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:02.609083 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:02.625801 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38245
	I0731 22:48:02.626263 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:02.626720 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:02.626744 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:02.627072 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:02.627288 1199395 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:48:02.627467 1199395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:02.627492 1199395 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:48:02.630551 1199395 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:02.630964 1199395 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:48:02.630997 1199395 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:02.631205 1199395 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:48:02.631428 1199395 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:48:02.631618 1199395 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:48:02.631800 1199395 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:48:02.715724 1199395 ssh_runner.go:195] Run: systemctl --version
	I0731 22:48:02.722165 1199395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:02.737608 1199395 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:48:02.737643 1199395 api_server.go:166] Checking apiserver status ...
	I0731 22:48:02.737684 1199395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:48:02.752842 1199395 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0731 22:48:02.763314 1199395 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:48:02.763374 1199395 ssh_runner.go:195] Run: ls
	I0731 22:48:02.767860 1199395 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:48:02.772211 1199395 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:48:02.772244 1199395 status.go:422] ha-150891 apiserver status = Running (err=<nil>)
	I0731 22:48:02.772254 1199395 status.go:257] ha-150891 status: &{Name:ha-150891 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:02.772272 1199395 status.go:255] checking status of ha-150891-m02 ...
	I0731 22:48:02.772590 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:02.772618 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:02.788042 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46593
	I0731 22:48:02.788516 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:02.789069 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:02.789093 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:02.789430 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:02.789664 1199395 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:48:02.791211 1199395 status.go:330] ha-150891-m02 host status = "Running" (err=<nil>)
	I0731 22:48:02.791249 1199395 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:48:02.791657 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:02.791701 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:02.808269 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41101
	I0731 22:48:02.808772 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:02.809318 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:02.809341 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:02.809644 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:02.809797 1199395 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:48:02.812727 1199395 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:02.813163 1199395 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:48:02.813197 1199395 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:02.813334 1199395 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:48:02.813759 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:02.813814 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:02.829338 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46025
	I0731 22:48:02.829914 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:02.830442 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:02.830472 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:02.830853 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:02.831053 1199395 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:48:02.831290 1199395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:02.831315 1199395 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:48:02.834391 1199395 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:02.834857 1199395 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:48:02.834886 1199395 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:02.835070 1199395 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:48:02.835287 1199395 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:48:02.835432 1199395 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:48:02.835604 1199395 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	W0731 22:48:03.300324 1199395 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:03.300406 1199395 retry.go:31] will retry after 337.946917ms: dial tcp 192.168.39.224:22: connect: no route to host
	W0731 22:48:06.688396 1199395 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.224:22: connect: no route to host
	W0731 22:48:06.688532 1199395 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0731 22:48:06.688556 1199395 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:06.688565 1199395 status.go:257] ha-150891-m02 status: &{Name:ha-150891-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 22:48:06.688589 1199395 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:06.688598 1199395 status.go:255] checking status of ha-150891-m03 ...
	I0731 22:48:06.688908 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:06.688955 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:06.704476 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37395
	I0731 22:48:06.704933 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:06.705363 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:06.705388 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:06.705719 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:06.705938 1199395 main.go:141] libmachine: (ha-150891-m03) Calling .GetState
	I0731 22:48:06.707558 1199395 status.go:330] ha-150891-m03 host status = "Running" (err=<nil>)
	I0731 22:48:06.707580 1199395 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:06.707898 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:06.707942 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:06.723497 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0731 22:48:06.723980 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:06.724450 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:06.724480 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:06.724825 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:06.724995 1199395 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:48:06.727951 1199395 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:06.728402 1199395 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:06.728423 1199395 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:06.728581 1199395 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:06.729034 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:06.729084 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:06.745274 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43975
	I0731 22:48:06.745748 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:06.746275 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:06.746307 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:06.746743 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:06.746959 1199395 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:48:06.747151 1199395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:06.747170 1199395 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:48:06.750154 1199395 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:06.750617 1199395 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:06.750646 1199395 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:06.750756 1199395 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:48:06.750979 1199395 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:48:06.751155 1199395 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:48:06.751289 1199395 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:48:06.831538 1199395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:06.846791 1199395 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:48:06.846834 1199395 api_server.go:166] Checking apiserver status ...
	I0731 22:48:06.846884 1199395 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:48:06.860675 1199395 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup
	W0731 22:48:06.870289 1199395 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:48:06.870362 1199395 ssh_runner.go:195] Run: ls
	I0731 22:48:06.874722 1199395 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:48:06.880143 1199395 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:48:06.880176 1199395 status.go:422] ha-150891-m03 apiserver status = Running (err=<nil>)
	I0731 22:48:06.880188 1199395 status.go:257] ha-150891-m03 status: &{Name:ha-150891-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:06.880211 1199395 status.go:255] checking status of ha-150891-m04 ...
	I0731 22:48:06.880634 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:06.880666 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:06.896795 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38937
	I0731 22:48:06.897266 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:06.897819 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:06.897843 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:06.898150 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:06.898353 1199395 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:48:06.899953 1199395 status.go:330] ha-150891-m04 host status = "Running" (err=<nil>)
	I0731 22:48:06.899977 1199395 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:06.900333 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:06.900357 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:06.916786 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41431
	I0731 22:48:06.917276 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:06.917759 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:06.917786 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:06.918115 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:06.918316 1199395 main.go:141] libmachine: (ha-150891-m04) Calling .GetIP
	I0731 22:48:06.921179 1199395 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:06.921616 1199395 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:06.921646 1199395 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:06.921825 1199395 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:06.922251 1199395 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:06.922300 1199395 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:06.937712 1199395 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0731 22:48:06.938237 1199395 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:06.938775 1199395 main.go:141] libmachine: Using API Version  1
	I0731 22:48:06.938797 1199395 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:06.939126 1199395 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:06.939356 1199395 main.go:141] libmachine: (ha-150891-m04) Calling .DriverName
	I0731 22:48:06.939573 1199395 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:06.939597 1199395 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHHostname
	I0731 22:48:06.942541 1199395 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:06.942978 1199395 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:06.943006 1199395 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:06.943171 1199395 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHPort
	I0731 22:48:06.943360 1199395 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHKeyPath
	I0731 22:48:06.943513 1199395 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHUsername
	I0731 22:48:06.943676 1199395 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m04/id_rsa Username:docker}
	I0731 22:48:07.027023 1199395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:07.041859 1199395 status.go:257] ha-150891-m04 status: &{Name:ha-150891-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
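
Note: for the reachable control-plane nodes, the status command probes https://192.168.39.254:8443/healthz and treats a 200 "ok" as a running apiserver (api_server.go lines above). A minimal sketch of that probe; skipping TLS verification here is a simplification for illustration, the real client trusts the cluster CA:

	// healthz_probe.go: check the apiserver /healthz endpoint over HTTPS.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func apiserverHealthy(url string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443/healthz")
		fmt.Println(ok, err) // "true <nil>" corresponds to the "returned 200: ok" log lines
	}
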
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr: exit status 3 (4.893753043s)

                                                
                                                
-- stdout --
	ha-150891
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-150891-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:48:08.339149 1199496 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:48:08.339422 1199496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:08.339432 1199496 out.go:304] Setting ErrFile to fd 2...
	I0731 22:48:08.339437 1199496 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:08.339611 1199496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:48:08.339806 1199496 out.go:298] Setting JSON to false
	I0731 22:48:08.339835 1199496 mustload.go:65] Loading cluster: ha-150891
	I0731 22:48:08.339943 1199496 notify.go:220] Checking for updates...
	I0731 22:48:08.340407 1199496 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:48:08.340429 1199496 status.go:255] checking status of ha-150891 ...
	I0731 22:48:08.341011 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:08.341086 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:08.362435 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0731 22:48:08.362968 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:08.363559 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:08.363581 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:08.364067 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:08.364363 1199496 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:48:08.366070 1199496 status.go:330] ha-150891 host status = "Running" (err=<nil>)
	I0731 22:48:08.366090 1199496 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:48:08.366392 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:08.366443 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:08.382424 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42523
	I0731 22:48:08.382856 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:08.383429 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:08.383455 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:08.383941 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:08.384210 1199496 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:48:08.387354 1199496 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:08.387762 1199496 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:48:08.387799 1199496 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:08.387918 1199496 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:48:08.388315 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:08.388367 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:08.404330 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33871
	I0731 22:48:08.404847 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:08.405415 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:08.405438 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:08.405828 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:08.406037 1199496 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:48:08.406262 1199496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:08.406299 1199496 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:48:08.409559 1199496 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:08.410033 1199496 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:48:08.410075 1199496 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:08.410316 1199496 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:48:08.410521 1199496 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:48:08.410686 1199496 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:48:08.410873 1199496 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:48:08.495665 1199496 ssh_runner.go:195] Run: systemctl --version
	I0731 22:48:08.501825 1199496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:08.519657 1199496 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:48:08.519695 1199496 api_server.go:166] Checking apiserver status ...
	I0731 22:48:08.519751 1199496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:48:08.534486 1199496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0731 22:48:08.544726 1199496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:48:08.544800 1199496 ssh_runner.go:195] Run: ls
	I0731 22:48:08.550676 1199496 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:48:08.554794 1199496 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:48:08.554823 1199496 status.go:422] ha-150891 apiserver status = Running (err=<nil>)
	I0731 22:48:08.554834 1199496 status.go:257] ha-150891 status: &{Name:ha-150891 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:08.554854 1199496 status.go:255] checking status of ha-150891-m02 ...
	I0731 22:48:08.555136 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:08.555176 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:08.570664 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0731 22:48:08.571184 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:08.571753 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:08.571782 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:08.572149 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:08.572343 1199496 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:48:08.573846 1199496 status.go:330] ha-150891-m02 host status = "Running" (err=<nil>)
	I0731 22:48:08.573863 1199496 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:48:08.574166 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:08.574205 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:08.590066 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41181
	I0731 22:48:08.590539 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:08.591081 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:08.591104 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:08.591434 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:08.591817 1199496 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:48:08.595069 1199496 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:08.595530 1199496 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:48:08.595558 1199496 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:08.595739 1199496 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:48:08.596058 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:08.596135 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:08.612012 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I0731 22:48:08.612563 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:08.613064 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:08.613084 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:08.613419 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:08.613614 1199496 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:48:08.613820 1199496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:08.613840 1199496 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:48:08.616898 1199496 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:08.617381 1199496 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:48:08.617402 1199496 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:08.617552 1199496 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:48:08.617765 1199496 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:48:08.617924 1199496 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:48:08.618111 1199496 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	W0731 22:48:09.760484 1199496 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:09.760551 1199496 retry.go:31] will retry after 288.516488ms: dial tcp 192.168.39.224:22: connect: no route to host
	W0731 22:48:12.836446 1199496 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.224:22: connect: no route to host
	W0731 22:48:12.836571 1199496 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0731 22:48:12.836604 1199496 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:12.836617 1199496 status.go:257] ha-150891-m02 status: &{Name:ha-150891-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 22:48:12.836648 1199496 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:12.836659 1199496 status.go:255] checking status of ha-150891-m03 ...
	I0731 22:48:12.836995 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:12.837053 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:12.852679 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46535
	I0731 22:48:12.853155 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:12.853722 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:12.853753 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:12.854080 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:12.854294 1199496 main.go:141] libmachine: (ha-150891-m03) Calling .GetState
	I0731 22:48:12.855916 1199496 status.go:330] ha-150891-m03 host status = "Running" (err=<nil>)
	I0731 22:48:12.855935 1199496 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:12.856286 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:12.856324 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:12.872758 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36131
	I0731 22:48:12.873262 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:12.873823 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:12.873851 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:12.874212 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:12.874399 1199496 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:48:12.877414 1199496 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:12.877855 1199496 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:12.877880 1199496 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:12.878015 1199496 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:12.878354 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:12.878398 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:12.893545 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43475
	I0731 22:48:12.893978 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:12.894416 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:12.894437 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:12.894774 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:12.895027 1199496 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:48:12.895228 1199496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:12.895249 1199496 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:48:12.898315 1199496 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:12.898706 1199496 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:12.898737 1199496 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:12.898842 1199496 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:48:12.899027 1199496 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:48:12.899151 1199496 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:48:12.899277 1199496 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:48:12.979348 1199496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:12.994107 1199496 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:48:12.994140 1199496 api_server.go:166] Checking apiserver status ...
	I0731 22:48:12.994173 1199496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:48:13.008116 1199496 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup
	W0731 22:48:13.017401 1199496 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:48:13.017471 1199496 ssh_runner.go:195] Run: ls
	I0731 22:48:13.021757 1199496 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:48:13.026009 1199496 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:48:13.026038 1199496 status.go:422] ha-150891-m03 apiserver status = Running (err=<nil>)
	I0731 22:48:13.026048 1199496 status.go:257] ha-150891-m03 status: &{Name:ha-150891-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:13.026063 1199496 status.go:255] checking status of ha-150891-m04 ...
	I0731 22:48:13.026377 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:13.026403 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:13.043171 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39999
	I0731 22:48:13.043624 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:13.044115 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:13.044142 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:13.044469 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:13.044680 1199496 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:48:13.046487 1199496 status.go:330] ha-150891-m04 host status = "Running" (err=<nil>)
	I0731 22:48:13.046506 1199496 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:13.046992 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:13.047028 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:13.062908 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43543
	I0731 22:48:13.063402 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:13.063880 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:13.063902 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:13.064253 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:13.064436 1199496 main.go:141] libmachine: (ha-150891-m04) Calling .GetIP
	I0731 22:48:13.067139 1199496 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:13.067541 1199496 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:13.067584 1199496 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:13.067852 1199496 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:13.068240 1199496 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:13.068297 1199496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:13.084462 1199496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0731 22:48:13.084952 1199496 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:13.085496 1199496 main.go:141] libmachine: Using API Version  1
	I0731 22:48:13.085518 1199496 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:13.085882 1199496 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:13.086099 1199496 main.go:141] libmachine: (ha-150891-m04) Calling .DriverName
	I0731 22:48:13.086299 1199496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:13.086322 1199496 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHHostname
	I0731 22:48:13.089245 1199496 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:13.089672 1199496 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:13.089707 1199496 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:13.089883 1199496 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHPort
	I0731 22:48:13.090047 1199496 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHKeyPath
	I0731 22:48:13.090170 1199496 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHUsername
	I0731 22:48:13.090279 1199496 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m04/id_rsa Username:docker}
	I0731 22:48:13.166977 1199496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:13.181656 1199496 status.go:257] ha-150891-m04 status: &{Name:ha-150891-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr: exit status 3 (3.732946528s)

-- stdout --
	ha-150891
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-150891-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0731 22:48:17.656811 1199611 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:48:17.657255 1199611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:17.657268 1199611 out.go:304] Setting ErrFile to fd 2...
	I0731 22:48:17.657275 1199611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:17.657830 1199611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:48:17.658168 1199611 out.go:298] Setting JSON to false
	I0731 22:48:17.658211 1199611 mustload.go:65] Loading cluster: ha-150891
	I0731 22:48:17.658300 1199611 notify.go:220] Checking for updates...
	I0731 22:48:17.658952 1199611 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:48:17.658976 1199611 status.go:255] checking status of ha-150891 ...
	I0731 22:48:17.659470 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:17.659533 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:17.675649 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34751
	I0731 22:48:17.676182 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:17.676707 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:17.676730 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:17.677147 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:17.677415 1199611 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:48:17.679165 1199611 status.go:330] ha-150891 host status = "Running" (err=<nil>)
	I0731 22:48:17.679182 1199611 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:48:17.679465 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:17.679511 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:17.694903 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35095
	I0731 22:48:17.695388 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:17.695900 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:17.695941 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:17.696295 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:17.696507 1199611 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:48:17.699120 1199611 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:17.699586 1199611 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:48:17.699612 1199611 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:17.699736 1199611 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:48:17.700072 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:17.700145 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:17.715496 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I0731 22:48:17.716000 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:17.718125 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:17.718169 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:17.718596 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:17.718839 1199611 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:48:17.719088 1199611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:17.719127 1199611 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:48:17.722294 1199611 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:17.722815 1199611 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:48:17.722846 1199611 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:17.722988 1199611 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:48:17.723190 1199611 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:48:17.723353 1199611 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:48:17.723513 1199611 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:48:17.807700 1199611 ssh_runner.go:195] Run: systemctl --version
	I0731 22:48:17.814192 1199611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:17.833697 1199611 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:48:17.833731 1199611 api_server.go:166] Checking apiserver status ...
	I0731 22:48:17.833803 1199611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:48:17.851720 1199611 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0731 22:48:17.861651 1199611 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:48:17.861710 1199611 ssh_runner.go:195] Run: ls
	I0731 22:48:17.866450 1199611 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:48:17.872406 1199611 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:48:17.872439 1199611 status.go:422] ha-150891 apiserver status = Running (err=<nil>)
	I0731 22:48:17.872450 1199611 status.go:257] ha-150891 status: &{Name:ha-150891 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:17.872475 1199611 status.go:255] checking status of ha-150891-m02 ...
	I0731 22:48:17.872807 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:17.872835 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:17.888723 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42493
	I0731 22:48:17.889136 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:17.889640 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:17.889665 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:17.889984 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:17.890174 1199611 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:48:17.891836 1199611 status.go:330] ha-150891-m02 host status = "Running" (err=<nil>)
	I0731 22:48:17.891857 1199611 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:48:17.892189 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:17.892228 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:17.908333 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42167
	I0731 22:48:17.908850 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:17.909366 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:17.909387 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:17.909677 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:17.909874 1199611 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:48:17.912831 1199611 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:17.913237 1199611 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:48:17.913268 1199611 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:17.913486 1199611 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:48:17.913789 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:17.913812 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:17.929220 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35339
	I0731 22:48:17.929827 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:17.930377 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:17.930400 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:17.930774 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:17.931023 1199611 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:48:17.931248 1199611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:17.931272 1199611 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:48:17.934334 1199611 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:17.934797 1199611 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:48:17.934871 1199611 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:17.935098 1199611 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:48:17.935297 1199611 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:48:17.935449 1199611 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:48:17.935603 1199611 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	W0731 22:48:20.992400 1199611 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.224:22: connect: no route to host
	W0731 22:48:20.992504 1199611 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0731 22:48:20.992520 1199611 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:20.992528 1199611 status.go:257] ha-150891-m02 status: &{Name:ha-150891-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 22:48:20.992547 1199611 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:20.992569 1199611 status.go:255] checking status of ha-150891-m03 ...
	I0731 22:48:20.992886 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:20.992931 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:21.008437 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42697
	I0731 22:48:21.008964 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:21.009489 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:21.009519 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:21.009934 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:21.010145 1199611 main.go:141] libmachine: (ha-150891-m03) Calling .GetState
	I0731 22:48:21.012006 1199611 status.go:330] ha-150891-m03 host status = "Running" (err=<nil>)
	I0731 22:48:21.012028 1199611 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:21.012367 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:21.012415 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:21.027886 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0731 22:48:21.028427 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:21.029031 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:21.029058 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:21.029394 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:21.029590 1199611 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:48:21.032959 1199611 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:21.033404 1199611 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:21.033435 1199611 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:21.033589 1199611 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:21.033898 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:21.033959 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:21.050839 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0731 22:48:21.051274 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:21.051853 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:21.051878 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:21.052374 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:21.052578 1199611 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:48:21.052801 1199611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:21.052829 1199611 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:48:21.055430 1199611 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:21.055906 1199611 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:21.055930 1199611 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:21.056119 1199611 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:48:21.056307 1199611 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:48:21.056433 1199611 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:48:21.056552 1199611 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:48:21.135149 1199611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:21.150338 1199611 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:48:21.150376 1199611 api_server.go:166] Checking apiserver status ...
	I0731 22:48:21.150416 1199611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:48:21.164854 1199611 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup
	W0731 22:48:21.174983 1199611 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:48:21.175050 1199611 ssh_runner.go:195] Run: ls
	I0731 22:48:21.179821 1199611 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:48:21.183971 1199611 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:48:21.184004 1199611 status.go:422] ha-150891-m03 apiserver status = Running (err=<nil>)
	I0731 22:48:21.184016 1199611 status.go:257] ha-150891-m03 status: &{Name:ha-150891-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:21.184035 1199611 status.go:255] checking status of ha-150891-m04 ...
	I0731 22:48:21.184465 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:21.184497 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:21.200149 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35881
	I0731 22:48:21.200573 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:21.201040 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:21.201059 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:21.201470 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:21.201743 1199611 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:48:21.203284 1199611 status.go:330] ha-150891-m04 host status = "Running" (err=<nil>)
	I0731 22:48:21.203302 1199611 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:21.203649 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:21.203695 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:21.220384 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43235
	I0731 22:48:21.220809 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:21.221291 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:21.221317 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:21.221751 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:21.221960 1199611 main.go:141] libmachine: (ha-150891-m04) Calling .GetIP
	I0731 22:48:21.224752 1199611 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:21.225222 1199611 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:21.225275 1199611 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:21.225345 1199611 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:21.225648 1199611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:21.225671 1199611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:21.241013 1199611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45867
	I0731 22:48:21.241452 1199611 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:21.242030 1199611 main.go:141] libmachine: Using API Version  1
	I0731 22:48:21.242056 1199611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:21.242363 1199611 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:21.242571 1199611 main.go:141] libmachine: (ha-150891-m04) Calling .DriverName
	I0731 22:48:21.242773 1199611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:21.242797 1199611 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHHostname
	I0731 22:48:21.245978 1199611 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:21.246467 1199611 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:21.246497 1199611 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:21.246693 1199611 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHPort
	I0731 22:48:21.246874 1199611 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHKeyPath
	I0731 22:48:21.247054 1199611 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHUsername
	I0731 22:48:21.247279 1199611 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m04/id_rsa Username:docker}
	I0731 22:48:21.327252 1199611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:21.342235 1199611 status.go:257] ha-150891-m04 status: &{Name:ha-150891-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr: exit status 3 (3.728835134s)

-- stdout --
	ha-150891
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-150891-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0731 22:48:26.338842 1199728 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:48:26.338947 1199728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:26.338953 1199728 out.go:304] Setting ErrFile to fd 2...
	I0731 22:48:26.338958 1199728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:26.339164 1199728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:48:26.339379 1199728 out.go:298] Setting JSON to false
	I0731 22:48:26.339412 1199728 mustload.go:65] Loading cluster: ha-150891
	I0731 22:48:26.339460 1199728 notify.go:220] Checking for updates...
	I0731 22:48:26.339847 1199728 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:48:26.339867 1199728 status.go:255] checking status of ha-150891 ...
	I0731 22:48:26.340313 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:26.340371 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:26.361436 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46359
	I0731 22:48:26.361935 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:26.362635 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:26.362664 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:26.363098 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:26.363333 1199728 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:48:26.365232 1199728 status.go:330] ha-150891 host status = "Running" (err=<nil>)
	I0731 22:48:26.365264 1199728 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:48:26.365565 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:26.365609 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:26.381503 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I0731 22:48:26.382005 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:26.382514 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:26.382538 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:26.382890 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:26.383171 1199728 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:48:26.386645 1199728 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:26.387059 1199728 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:48:26.387093 1199728 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:26.387213 1199728 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:48:26.387527 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:26.387567 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:26.402889 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I0731 22:48:26.403350 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:26.403880 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:26.403915 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:26.404328 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:26.404552 1199728 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:48:26.404812 1199728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:26.404842 1199728 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:48:26.407768 1199728 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:26.408228 1199728 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:48:26.408252 1199728 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:26.408507 1199728 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:48:26.408744 1199728 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:48:26.408906 1199728 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:48:26.409051 1199728 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:48:26.495557 1199728 ssh_runner.go:195] Run: systemctl --version
	I0731 22:48:26.501735 1199728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:26.516193 1199728 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:48:26.516228 1199728 api_server.go:166] Checking apiserver status ...
	I0731 22:48:26.516278 1199728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:48:26.530999 1199728 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0731 22:48:26.540287 1199728 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:48:26.540346 1199728 ssh_runner.go:195] Run: ls
	I0731 22:48:26.544701 1199728 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:48:26.548916 1199728 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:48:26.548942 1199728 status.go:422] ha-150891 apiserver status = Running (err=<nil>)
	I0731 22:48:26.548951 1199728 status.go:257] ha-150891 status: &{Name:ha-150891 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:26.548969 1199728 status.go:255] checking status of ha-150891-m02 ...
	I0731 22:48:26.549254 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:26.549298 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:26.564775 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46587
	I0731 22:48:26.565225 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:26.565778 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:26.565806 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:26.566160 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:26.566358 1199728 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:48:26.568264 1199728 status.go:330] ha-150891-m02 host status = "Running" (err=<nil>)
	I0731 22:48:26.568290 1199728 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:48:26.568695 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:26.568727 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:26.585300 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I0731 22:48:26.585757 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:26.586383 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:26.586413 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:26.586825 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:26.587049 1199728 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:48:26.590024 1199728 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:26.590591 1199728 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:48:26.590627 1199728 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:26.590933 1199728 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:48:26.591248 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:26.591276 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:26.606834 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37769
	I0731 22:48:26.607310 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:26.607822 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:26.607841 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:26.608174 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:26.608382 1199728 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:48:26.608597 1199728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:26.608622 1199728 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:48:26.611304 1199728 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:26.611748 1199728 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:48:26.611778 1199728 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:48:26.611917 1199728 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:48:26.612172 1199728 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:48:26.612388 1199728 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:48:26.612565 1199728 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	W0731 22:48:29.664375 1199728 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.224:22: connect: no route to host
	W0731 22:48:29.664478 1199728 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	E0731 22:48:29.664496 1199728 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:29.664513 1199728 status.go:257] ha-150891-m02 status: &{Name:ha-150891-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0731 22:48:29.664533 1199728 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.224:22: connect: no route to host
	I0731 22:48:29.664551 1199728 status.go:255] checking status of ha-150891-m03 ...
	I0731 22:48:29.664861 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:29.664906 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:29.681103 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35959
	I0731 22:48:29.681563 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:29.682022 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:29.682051 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:29.682366 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:29.682562 1199728 main.go:141] libmachine: (ha-150891-m03) Calling .GetState
	I0731 22:48:29.684636 1199728 status.go:330] ha-150891-m03 host status = "Running" (err=<nil>)
	I0731 22:48:29.684661 1199728 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:29.684971 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:29.685018 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:29.700381 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41899
	I0731 22:48:29.700846 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:29.701320 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:29.701347 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:29.701706 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:29.701893 1199728 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:48:29.705210 1199728 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:29.705677 1199728 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:29.705706 1199728 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:29.705871 1199728 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:29.706203 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:29.706245 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:29.721469 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45485
	I0731 22:48:29.722026 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:29.722533 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:29.722562 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:29.722881 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:29.723079 1199728 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:48:29.723228 1199728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:29.723244 1199728 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:48:29.725896 1199728 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:29.726472 1199728 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:29.726500 1199728 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:29.726707 1199728 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:48:29.726910 1199728 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:48:29.727094 1199728 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:48:29.727234 1199728 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:48:29.811643 1199728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:29.826899 1199728 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:48:29.826929 1199728 api_server.go:166] Checking apiserver status ...
	I0731 22:48:29.826966 1199728 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:48:29.842240 1199728 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup
	W0731 22:48:29.851898 1199728 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:48:29.851960 1199728 ssh_runner.go:195] Run: ls
	I0731 22:48:29.856925 1199728 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:48:29.861360 1199728 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:48:29.861395 1199728 status.go:422] ha-150891-m03 apiserver status = Running (err=<nil>)
	I0731 22:48:29.861407 1199728 status.go:257] ha-150891-m03 status: &{Name:ha-150891-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:29.861436 1199728 status.go:255] checking status of ha-150891-m04 ...
	I0731 22:48:29.861810 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:29.861840 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:29.877659 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
	I0731 22:48:29.878184 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:29.878679 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:29.878701 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:29.879055 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:29.879257 1199728 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:48:29.881012 1199728 status.go:330] ha-150891-m04 host status = "Running" (err=<nil>)
	I0731 22:48:29.881033 1199728 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:29.881358 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:29.881382 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:29.898136 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37105
	I0731 22:48:29.898541 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:29.899039 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:29.899066 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:29.899385 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:29.899575 1199728 main.go:141] libmachine: (ha-150891-m04) Calling .GetIP
	I0731 22:48:29.902419 1199728 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:29.902863 1199728 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:29.902894 1199728 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:29.903046 1199728 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:29.903350 1199728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:29.903392 1199728 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:29.918939 1199728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45193
	I0731 22:48:29.919400 1199728 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:29.919875 1199728 main.go:141] libmachine: Using API Version  1
	I0731 22:48:29.919905 1199728 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:29.920295 1199728 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:29.920498 1199728 main.go:141] libmachine: (ha-150891-m04) Calling .DriverName
	I0731 22:48:29.920704 1199728 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:29.920726 1199728 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHHostname
	I0731 22:48:29.923539 1199728 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:29.924045 1199728 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:29.924074 1199728 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:29.924255 1199728 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHPort
	I0731 22:48:29.924422 1199728 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHKeyPath
	I0731 22:48:29.924587 1199728 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHUsername
	I0731 22:48:29.924693 1199728 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m04/id_rsa Username:docker}
	I0731 22:48:30.003322 1199728 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:30.018526 1199728 status.go:257] ha-150891-m04 status: &{Name:ha-150891-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
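For reference, the per-node checks traced above reduce to two probes that minikube drives over SSH: the disk-usage query on /var ("df -h /var | awk 'NR==2{print $5}'") and the kubelet liveness test ("systemctl is-active --quiet service kubelet"). The sketch below is a minimal Go analogue of those probes; it runs the commands locally purely to stay self-contained, which is an assumption for illustration only, not how the status command itself works (it goes through its ssh_runner).

	// status_probe.go - a simplified sketch of the host checks shown in the log above.
	// Assumption: it runs directly on the node; minikube executes the same commands
	// over SSH through its ssh_runner.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Disk usage of /var, mirroring: sh -c "df -h /var | awk 'NR==2{print $5}'"
		out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
		if err != nil {
			fmt.Println("df probe failed:", err)
		} else {
			fmt.Println("/var usage:", strings.TrimSpace(string(out)))
		}

		// Kubelet liveness, mirroring: sudo systemctl is-active --quiet service kubelet
		// (a zero exit status means the unit is active).
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet: stopped or inactive:", err)
		} else {
			fmt.Println("kubelet: running")
		}
	}
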
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr: exit status 7 (630.351613ms)

                                                
                                                
-- stdout --
	ha-150891
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-150891-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:48:38.975260 1199866 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:48:38.975567 1199866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:38.975580 1199866 out.go:304] Setting ErrFile to fd 2...
	I0731 22:48:38.975587 1199866 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:38.975804 1199866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:48:38.975990 1199866 out.go:298] Setting JSON to false
	I0731 22:48:38.976019 1199866 mustload.go:65] Loading cluster: ha-150891
	I0731 22:48:38.976143 1199866 notify.go:220] Checking for updates...
	I0731 22:48:38.976429 1199866 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:48:38.976450 1199866 status.go:255] checking status of ha-150891 ...
	I0731 22:48:38.976944 1199866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:38.977019 1199866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:38.992929 1199866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32883
	I0731 22:48:38.993492 1199866 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:38.994105 1199866 main.go:141] libmachine: Using API Version  1
	I0731 22:48:38.994132 1199866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:38.994535 1199866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:38.994757 1199866 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:48:38.996599 1199866 status.go:330] ha-150891 host status = "Running" (err=<nil>)
	I0731 22:48:38.996626 1199866 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:48:38.996924 1199866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:38.996970 1199866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:39.012574 1199866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33589
	I0731 22:48:39.013131 1199866 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:39.013788 1199866 main.go:141] libmachine: Using API Version  1
	I0731 22:48:39.013821 1199866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:39.014204 1199866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:39.014413 1199866 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:48:39.017874 1199866 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:39.018354 1199866 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:48:39.018387 1199866 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:39.018557 1199866 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:48:39.018860 1199866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:39.018917 1199866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:39.035474 1199866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44599
	I0731 22:48:39.036063 1199866 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:39.036597 1199866 main.go:141] libmachine: Using API Version  1
	I0731 22:48:39.036661 1199866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:39.037005 1199866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:39.037204 1199866 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:48:39.037422 1199866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:39.037448 1199866 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:48:39.040323 1199866 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:39.040827 1199866 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:48:39.040860 1199866 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:48:39.041030 1199866 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:48:39.041258 1199866 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:48:39.041387 1199866 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:48:39.041705 1199866 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:48:39.123586 1199866 ssh_runner.go:195] Run: systemctl --version
	I0731 22:48:39.129610 1199866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:39.143955 1199866 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:48:39.144001 1199866 api_server.go:166] Checking apiserver status ...
	I0731 22:48:39.144048 1199866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:48:39.159447 1199866 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0731 22:48:39.170079 1199866 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:48:39.170138 1199866 ssh_runner.go:195] Run: ls
	I0731 22:48:39.174791 1199866 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:48:39.179142 1199866 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:48:39.179175 1199866 status.go:422] ha-150891 apiserver status = Running (err=<nil>)
	I0731 22:48:39.179189 1199866 status.go:257] ha-150891 status: &{Name:ha-150891 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:39.179212 1199866 status.go:255] checking status of ha-150891-m02 ...
	I0731 22:48:39.179514 1199866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:39.179546 1199866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:39.195291 1199866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0731 22:48:39.195719 1199866 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:39.196246 1199866 main.go:141] libmachine: Using API Version  1
	I0731 22:48:39.196266 1199866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:39.196617 1199866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:39.196826 1199866 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:48:39.198419 1199866 status.go:330] ha-150891-m02 host status = "Stopped" (err=<nil>)
	I0731 22:48:39.198436 1199866 status.go:343] host is not running, skipping remaining checks
	I0731 22:48:39.198443 1199866 status.go:257] ha-150891-m02 status: &{Name:ha-150891-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:39.198468 1199866 status.go:255] checking status of ha-150891-m03 ...
	I0731 22:48:39.198784 1199866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:39.198813 1199866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:39.214251 1199866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I0731 22:48:39.214704 1199866 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:39.215206 1199866 main.go:141] libmachine: Using API Version  1
	I0731 22:48:39.215228 1199866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:39.215539 1199866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:39.215734 1199866 main.go:141] libmachine: (ha-150891-m03) Calling .GetState
	I0731 22:48:39.217439 1199866 status.go:330] ha-150891-m03 host status = "Running" (err=<nil>)
	I0731 22:48:39.217458 1199866 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:39.217771 1199866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:39.217820 1199866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:39.235364 1199866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42533
	I0731 22:48:39.235838 1199866 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:39.236329 1199866 main.go:141] libmachine: Using API Version  1
	I0731 22:48:39.236363 1199866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:39.236703 1199866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:39.236967 1199866 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:48:39.239779 1199866 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:39.240300 1199866 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:39.240332 1199866 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:39.240454 1199866 host.go:66] Checking if "ha-150891-m03" exists ...
	I0731 22:48:39.240798 1199866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:39.240850 1199866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:39.257110 1199866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43119
	I0731 22:48:39.257723 1199866 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:39.258245 1199866 main.go:141] libmachine: Using API Version  1
	I0731 22:48:39.258271 1199866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:39.258590 1199866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:39.258807 1199866 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:48:39.259027 1199866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:39.259056 1199866 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:48:39.262308 1199866 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:39.262865 1199866 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:39.262898 1199866 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:39.263076 1199866 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:48:39.263327 1199866 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:48:39.263508 1199866 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:48:39.263662 1199866 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:48:39.351755 1199866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:39.366146 1199866 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:48:39.366178 1199866 api_server.go:166] Checking apiserver status ...
	I0731 22:48:39.366227 1199866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:48:39.382315 1199866 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup
	W0731 22:48:39.391910 1199866 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1560/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:48:39.391969 1199866 ssh_runner.go:195] Run: ls
	I0731 22:48:39.397365 1199866 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:48:39.401964 1199866 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:48:39.402001 1199866 status.go:422] ha-150891-m03 apiserver status = Running (err=<nil>)
	I0731 22:48:39.402013 1199866 status.go:257] ha-150891-m03 status: &{Name:ha-150891-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:48:39.402030 1199866 status.go:255] checking status of ha-150891-m04 ...
	I0731 22:48:39.402451 1199866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:39.402488 1199866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:39.419168 1199866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38923
	I0731 22:48:39.419653 1199866 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:39.420199 1199866 main.go:141] libmachine: Using API Version  1
	I0731 22:48:39.420222 1199866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:39.420560 1199866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:39.420753 1199866 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:48:39.422304 1199866 status.go:330] ha-150891-m04 host status = "Running" (err=<nil>)
	I0731 22:48:39.422324 1199866 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:39.422596 1199866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:39.422619 1199866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:39.438863 1199866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45355
	I0731 22:48:39.439409 1199866 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:39.439839 1199866 main.go:141] libmachine: Using API Version  1
	I0731 22:48:39.439859 1199866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:39.440218 1199866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:39.440414 1199866 main.go:141] libmachine: (ha-150891-m04) Calling .GetIP
	I0731 22:48:39.443443 1199866 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:39.443858 1199866 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:39.443894 1199866 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:39.444057 1199866 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:48:39.444438 1199866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:39.444491 1199866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:39.460408 1199866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I0731 22:48:39.460974 1199866 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:39.461506 1199866 main.go:141] libmachine: Using API Version  1
	I0731 22:48:39.461527 1199866 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:39.461917 1199866 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:39.462133 1199866 main.go:141] libmachine: (ha-150891-m04) Calling .DriverName
	I0731 22:48:39.462338 1199866 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:48:39.462362 1199866 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHHostname
	I0731 22:48:39.465398 1199866 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:39.465797 1199866 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:39.465839 1199866 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:39.466023 1199866 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHPort
	I0731 22:48:39.466220 1199866 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHKeyPath
	I0731 22:48:39.466402 1199866 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHUsername
	I0731 22:48:39.466555 1199866 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m04/id_rsa Username:docker}
	I0731 22:48:39.543515 1199866 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:48:39.558085 1199866 status.go:257] ha-150891-m04 status: &{Name:ha-150891-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
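In both status runs an apiserver is only reported as Running after the same chain succeeds: pgrep locates the kube-apiserver process, the freezer-cgroup lookup is attempted (it exits with status 1 here and is tolerated), and a GET against https://192.168.39.254:8443/healthz returns 200. A minimal sketch of that final HTTP probe follows; TLS verification is skipped only to keep the sketch self-contained, which is a shortcut for illustration rather than what the real check does.

	// healthz_probe.go - sketch of the final apiserver check seen in the log; the
	// reported status flips to "Running" once /healthz answers 200.
	// Assumption: InsecureSkipVerify keeps the sketch self-contained; a real probe
	// should validate the server against the cluster CA instead.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
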
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-150891 -n ha-150891
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-150891 logs -n 25: (1.374508739s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891:/home/docker/cp-test_ha-150891-m03_ha-150891.txt                       |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891 sudo cat                                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m03_ha-150891.txt                                 |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m02:/home/docker/cp-test_ha-150891-m03_ha-150891-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m02 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m03_ha-150891-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04:/home/docker/cp-test_ha-150891-m03_ha-150891-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m04 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m03_ha-150891-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp testdata/cp-test.txt                                                | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3873107821/001/cp-test_ha-150891-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891:/home/docker/cp-test_ha-150891-m04_ha-150891.txt                       |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891 sudo cat                                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891.txt                                 |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m02:/home/docker/cp-test_ha-150891-m04_ha-150891-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m02 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03:/home/docker/cp-test_ha-150891-m04_ha-150891-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m03 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-150891 node stop m02 -v=7                                                     | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-150891 node start m02 -v=7                                                    | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:40:40
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:40:40.501333 1194386 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:40:40.501605 1194386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:40:40.501613 1194386 out.go:304] Setting ErrFile to fd 2...
	I0731 22:40:40.501617 1194386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:40:40.501819 1194386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:40:40.502468 1194386 out.go:298] Setting JSON to false
	I0731 22:40:40.503429 1194386 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":22991,"bootTime":1722442649,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 22:40:40.503497 1194386 start.go:139] virtualization: kvm guest
	I0731 22:40:40.505751 1194386 out.go:177] * [ha-150891] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 22:40:40.507210 1194386 notify.go:220] Checking for updates...
	I0731 22:40:40.507218 1194386 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:40:40.508910 1194386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:40:40.510277 1194386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:40:40.511652 1194386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:40:40.512941 1194386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 22:40:40.514171 1194386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:40:40.515483 1194386 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:40:40.553750 1194386 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 22:40:40.554943 1194386 start.go:297] selected driver: kvm2
	I0731 22:40:40.554960 1194386 start.go:901] validating driver "kvm2" against <nil>
	I0731 22:40:40.554999 1194386 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:40:40.555780 1194386 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:40:40.555881 1194386 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 22:40:40.571732 1194386 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 22:40:40.571800 1194386 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 22:40:40.572052 1194386 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:40:40.572145 1194386 cni.go:84] Creating CNI manager for ""
	I0731 22:40:40.572161 1194386 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0731 22:40:40.572169 1194386 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0731 22:40:40.572225 1194386 start.go:340] cluster config:
	{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:40:40.572324 1194386 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:40:40.574153 1194386 out.go:177] * Starting "ha-150891" primary control-plane node in "ha-150891" cluster
	I0731 22:40:40.575282 1194386 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:40:40.575322 1194386 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 22:40:40.575333 1194386 cache.go:56] Caching tarball of preloaded images
	I0731 22:40:40.575419 1194386 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 22:40:40.575430 1194386 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 22:40:40.575725 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:40:40.575747 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json: {Name:mk9638a254245e6b064f22970f1f8c3c5e0311aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:40:40.575883 1194386 start.go:360] acquireMachinesLock for ha-150891: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:40:40.575912 1194386 start.go:364] duration metric: took 15.828µs to acquireMachinesLock for "ha-150891"
	I0731 22:40:40.575929 1194386 start.go:93] Provisioning new machine with config: &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:40:40.575992 1194386 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 22:40:40.578292 1194386 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:40:40.578436 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:40:40.578493 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:40:40.594322 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36293
	I0731 22:40:40.594834 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:40:40.595361 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:40:40.595386 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:40:40.595699 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:40:40.595878 1194386 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:40:40.596066 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:40:40.596248 1194386 start.go:159] libmachine.API.Create for "ha-150891" (driver="kvm2")
	I0731 22:40:40.596282 1194386 client.go:168] LocalClient.Create starting
	I0731 22:40:40.596314 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem
	I0731 22:40:40.596345 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:40:40.596358 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:40:40.596402 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem
	I0731 22:40:40.596420 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:40:40.596431 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:40:40.596446 1194386 main.go:141] libmachine: Running pre-create checks...
	I0731 22:40:40.596455 1194386 main.go:141] libmachine: (ha-150891) Calling .PreCreateCheck
	I0731 22:40:40.596780 1194386 main.go:141] libmachine: (ha-150891) Calling .GetConfigRaw
	I0731 22:40:40.597147 1194386 main.go:141] libmachine: Creating machine...
	I0731 22:40:40.597160 1194386 main.go:141] libmachine: (ha-150891) Calling .Create
	I0731 22:40:40.597284 1194386 main.go:141] libmachine: (ha-150891) Creating KVM machine...
	I0731 22:40:40.598730 1194386 main.go:141] libmachine: (ha-150891) DBG | found existing default KVM network
	I0731 22:40:40.599631 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:40.599448 1194409 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f350}
	I0731 22:40:40.599656 1194386 main.go:141] libmachine: (ha-150891) DBG | created network xml: 
	I0731 22:40:40.599671 1194386 main.go:141] libmachine: (ha-150891) DBG | <network>
	I0731 22:40:40.599683 1194386 main.go:141] libmachine: (ha-150891) DBG |   <name>mk-ha-150891</name>
	I0731 22:40:40.599691 1194386 main.go:141] libmachine: (ha-150891) DBG |   <dns enable='no'/>
	I0731 22:40:40.599702 1194386 main.go:141] libmachine: (ha-150891) DBG |   
	I0731 22:40:40.599718 1194386 main.go:141] libmachine: (ha-150891) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 22:40:40.599727 1194386 main.go:141] libmachine: (ha-150891) DBG |     <dhcp>
	I0731 22:40:40.599740 1194386 main.go:141] libmachine: (ha-150891) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 22:40:40.599754 1194386 main.go:141] libmachine: (ha-150891) DBG |     </dhcp>
	I0731 22:40:40.599767 1194386 main.go:141] libmachine: (ha-150891) DBG |   </ip>
	I0731 22:40:40.599777 1194386 main.go:141] libmachine: (ha-150891) DBG |   
	I0731 22:40:40.599784 1194386 main.go:141] libmachine: (ha-150891) DBG | </network>
	I0731 22:40:40.599793 1194386 main.go:141] libmachine: (ha-150891) DBG | 
	I0731 22:40:40.604945 1194386 main.go:141] libmachine: (ha-150891) DBG | trying to create private KVM network mk-ha-150891 192.168.39.0/24...
	I0731 22:40:40.675326 1194386 main.go:141] libmachine: (ha-150891) DBG | private KVM network mk-ha-150891 192.168.39.0/24 created
	I0731 22:40:40.675369 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:40.675244 1194409 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:40:40.675384 1194386 main.go:141] libmachine: (ha-150891) Setting up store path in /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891 ...
	I0731 22:40:40.675405 1194386 main.go:141] libmachine: (ha-150891) Building disk image from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 22:40:40.675422 1194386 main.go:141] libmachine: (ha-150891) Downloading /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:40:40.957270 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:40.957094 1194409 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa...
	I0731 22:40:41.156324 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:41.156160 1194409 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/ha-150891.rawdisk...
	I0731 22:40:41.156354 1194386 main.go:141] libmachine: (ha-150891) DBG | Writing magic tar header
	I0731 22:40:41.156365 1194386 main.go:141] libmachine: (ha-150891) DBG | Writing SSH key tar header
	I0731 22:40:41.156373 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:41.156286 1194409 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891 ...
	I0731 22:40:41.156388 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891
	I0731 22:40:41.156487 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines
	I0731 22:40:41.156512 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:40:41.156521 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891 (perms=drwx------)
	I0731 22:40:41.156533 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines (perms=drwxr-xr-x)
	I0731 22:40:41.156539 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube (perms=drwxr-xr-x)
	I0731 22:40:41.156549 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186 (perms=drwxrwxr-x)
	I0731 22:40:41.156558 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186
	I0731 22:40:41.156567 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 22:40:41.156586 1194386 main.go:141] libmachine: (ha-150891) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 22:40:41.156598 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 22:40:41.156603 1194386 main.go:141] libmachine: (ha-150891) Creating domain...
	I0731 22:40:41.156636 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home/jenkins
	I0731 22:40:41.156661 1194386 main.go:141] libmachine: (ha-150891) DBG | Checking permissions on dir: /home
	I0731 22:40:41.156675 1194386 main.go:141] libmachine: (ha-150891) DBG | Skipping /home - not owner
	I0731 22:40:41.157818 1194386 main.go:141] libmachine: (ha-150891) define libvirt domain using xml: 
	I0731 22:40:41.157837 1194386 main.go:141] libmachine: (ha-150891) <domain type='kvm'>
	I0731 22:40:41.157843 1194386 main.go:141] libmachine: (ha-150891)   <name>ha-150891</name>
	I0731 22:40:41.157848 1194386 main.go:141] libmachine: (ha-150891)   <memory unit='MiB'>2200</memory>
	I0731 22:40:41.157856 1194386 main.go:141] libmachine: (ha-150891)   <vcpu>2</vcpu>
	I0731 22:40:41.157864 1194386 main.go:141] libmachine: (ha-150891)   <features>
	I0731 22:40:41.157894 1194386 main.go:141] libmachine: (ha-150891)     <acpi/>
	I0731 22:40:41.157922 1194386 main.go:141] libmachine: (ha-150891)     <apic/>
	I0731 22:40:41.157946 1194386 main.go:141] libmachine: (ha-150891)     <pae/>
	I0731 22:40:41.157977 1194386 main.go:141] libmachine: (ha-150891)     
	I0731 22:40:41.157990 1194386 main.go:141] libmachine: (ha-150891)   </features>
	I0731 22:40:41.158000 1194386 main.go:141] libmachine: (ha-150891)   <cpu mode='host-passthrough'>
	I0731 22:40:41.158011 1194386 main.go:141] libmachine: (ha-150891)   
	I0731 22:40:41.158020 1194386 main.go:141] libmachine: (ha-150891)   </cpu>
	I0731 22:40:41.158030 1194386 main.go:141] libmachine: (ha-150891)   <os>
	I0731 22:40:41.158039 1194386 main.go:141] libmachine: (ha-150891)     <type>hvm</type>
	I0731 22:40:41.158050 1194386 main.go:141] libmachine: (ha-150891)     <boot dev='cdrom'/>
	I0731 22:40:41.158063 1194386 main.go:141] libmachine: (ha-150891)     <boot dev='hd'/>
	I0731 22:40:41.158073 1194386 main.go:141] libmachine: (ha-150891)     <bootmenu enable='no'/>
	I0731 22:40:41.158082 1194386 main.go:141] libmachine: (ha-150891)   </os>
	I0731 22:40:41.158091 1194386 main.go:141] libmachine: (ha-150891)   <devices>
	I0731 22:40:41.158104 1194386 main.go:141] libmachine: (ha-150891)     <disk type='file' device='cdrom'>
	I0731 22:40:41.158113 1194386 main.go:141] libmachine: (ha-150891)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/boot2docker.iso'/>
	I0731 22:40:41.158121 1194386 main.go:141] libmachine: (ha-150891)       <target dev='hdc' bus='scsi'/>
	I0731 22:40:41.158126 1194386 main.go:141] libmachine: (ha-150891)       <readonly/>
	I0731 22:40:41.158134 1194386 main.go:141] libmachine: (ha-150891)     </disk>
	I0731 22:40:41.158144 1194386 main.go:141] libmachine: (ha-150891)     <disk type='file' device='disk'>
	I0731 22:40:41.158170 1194386 main.go:141] libmachine: (ha-150891)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 22:40:41.158194 1194386 main.go:141] libmachine: (ha-150891)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/ha-150891.rawdisk'/>
	I0731 22:40:41.158208 1194386 main.go:141] libmachine: (ha-150891)       <target dev='hda' bus='virtio'/>
	I0731 22:40:41.158218 1194386 main.go:141] libmachine: (ha-150891)     </disk>
	I0731 22:40:41.158230 1194386 main.go:141] libmachine: (ha-150891)     <interface type='network'>
	I0731 22:40:41.158242 1194386 main.go:141] libmachine: (ha-150891)       <source network='mk-ha-150891'/>
	I0731 22:40:41.158260 1194386 main.go:141] libmachine: (ha-150891)       <model type='virtio'/>
	I0731 22:40:41.158277 1194386 main.go:141] libmachine: (ha-150891)     </interface>
	I0731 22:40:41.158295 1194386 main.go:141] libmachine: (ha-150891)     <interface type='network'>
	I0731 22:40:41.158312 1194386 main.go:141] libmachine: (ha-150891)       <source network='default'/>
	I0731 22:40:41.158323 1194386 main.go:141] libmachine: (ha-150891)       <model type='virtio'/>
	I0731 22:40:41.158333 1194386 main.go:141] libmachine: (ha-150891)     </interface>
	I0731 22:40:41.158344 1194386 main.go:141] libmachine: (ha-150891)     <serial type='pty'>
	I0731 22:40:41.158352 1194386 main.go:141] libmachine: (ha-150891)       <target port='0'/>
	I0731 22:40:41.158357 1194386 main.go:141] libmachine: (ha-150891)     </serial>
	I0731 22:40:41.158364 1194386 main.go:141] libmachine: (ha-150891)     <console type='pty'>
	I0731 22:40:41.158370 1194386 main.go:141] libmachine: (ha-150891)       <target type='serial' port='0'/>
	I0731 22:40:41.158377 1194386 main.go:141] libmachine: (ha-150891)     </console>
	I0731 22:40:41.158382 1194386 main.go:141] libmachine: (ha-150891)     <rng model='virtio'>
	I0731 22:40:41.158393 1194386 main.go:141] libmachine: (ha-150891)       <backend model='random'>/dev/random</backend>
	I0731 22:40:41.158409 1194386 main.go:141] libmachine: (ha-150891)     </rng>
	I0731 22:40:41.158425 1194386 main.go:141] libmachine: (ha-150891)     
	I0731 22:40:41.158437 1194386 main.go:141] libmachine: (ha-150891)     
	I0731 22:40:41.158446 1194386 main.go:141] libmachine: (ha-150891)   </devices>
	I0731 22:40:41.158457 1194386 main.go:141] libmachine: (ha-150891) </domain>
	I0731 22:40:41.158465 1194386 main.go:141] libmachine: (ha-150891) 
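The XML dumped above is handed to libvirt, which defines and boots the guest. A short sketch of that step using the upstream libvirt Go bindings (import path, file name and error handling are illustrative; the kvm2 driver's own code differs):

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// domain.xml would hold a <domain type='kvm'> document like the one above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain from XML, then start it.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}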
	I0731 22:40:41.162729 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:b7:c5:0e in network default
	I0731 22:40:41.163316 1194386 main.go:141] libmachine: (ha-150891) Ensuring networks are active...
	I0731 22:40:41.163335 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:41.163965 1194386 main.go:141] libmachine: (ha-150891) Ensuring network default is active
	I0731 22:40:41.164277 1194386 main.go:141] libmachine: (ha-150891) Ensuring network mk-ha-150891 is active
	I0731 22:40:41.164795 1194386 main.go:141] libmachine: (ha-150891) Getting domain xml...
	I0731 22:40:41.165491 1194386 main.go:141] libmachine: (ha-150891) Creating domain...
	I0731 22:40:42.383940 1194386 main.go:141] libmachine: (ha-150891) Waiting to get IP...
	I0731 22:40:42.384727 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:42.385076 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:42.385112 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:42.385050 1194409 retry.go:31] will retry after 303.270484ms: waiting for machine to come up
	I0731 22:40:42.690183 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:42.690649 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:42.690673 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:42.690608 1194409 retry.go:31] will retry after 318.522166ms: waiting for machine to come up
	I0731 22:40:43.011209 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:43.011564 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:43.011603 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:43.011546 1194409 retry.go:31] will retry after 482.718271ms: waiting for machine to come up
	I0731 22:40:43.496168 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:43.496531 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:43.496561 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:43.496474 1194409 retry.go:31] will retry after 430.6903ms: waiting for machine to come up
	I0731 22:40:43.929145 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:43.929597 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:43.929618 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:43.929547 1194409 retry.go:31] will retry after 659.092465ms: waiting for machine to come up
	I0731 22:40:44.590408 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:44.590821 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:44.590849 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:44.590777 1194409 retry.go:31] will retry after 721.169005ms: waiting for machine to come up
	I0731 22:40:45.313753 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:45.314240 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:45.314271 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:45.314183 1194409 retry.go:31] will retry after 721.182405ms: waiting for machine to come up
	I0731 22:40:46.036604 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:46.037080 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:46.037108 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:46.037030 1194409 retry.go:31] will retry after 950.144159ms: waiting for machine to come up
	I0731 22:40:46.989140 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:46.989471 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:46.989495 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:46.989422 1194409 retry.go:31] will retry after 1.605315848s: waiting for machine to come up
	I0731 22:40:48.597253 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:48.597680 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:48.597714 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:48.597629 1194409 retry.go:31] will retry after 1.497155047s: waiting for machine to come up
	I0731 22:40:50.097369 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:50.097837 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:50.097894 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:50.097827 1194409 retry.go:31] will retry after 1.906642059s: waiting for machine to come up
	I0731 22:40:52.006830 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:52.007200 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:52.007231 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:52.007157 1194409 retry.go:31] will retry after 3.526118614s: waiting for machine to come up
	I0731 22:40:55.537756 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:55.538179 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:55.538203 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:55.538137 1194409 retry.go:31] will retry after 3.929909401s: waiting for machine to come up
	I0731 22:40:59.469246 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:40:59.469664 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find current IP address of domain ha-150891 in network mk-ha-150891
	I0731 22:40:59.469685 1194386 main.go:141] libmachine: (ha-150891) DBG | I0731 22:40:59.469620 1194409 retry.go:31] will retry after 4.739931386s: waiting for machine to come up
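The "will retry after ..." lines come from a poll loop: query the network's DHCP leases, and if the domain has no address yet, sleep for a growing, jittered interval and look again. A self-contained sketch of that wait loop (lookupIP is a hypothetical stand-in for the lease query; the delays and cap are illustrative):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases.
func lookupIP() (string, error) { return "", errNoLease }

// waitForIP retries lookupIP with a growing, jittered backoff, mirroring the
// "will retry after ...: waiting for machine to come up" lines above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay = delay * 3 / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP")
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err == nil {
		fmt.Println("Found IP for machine:", ip)
	} else {
		fmt.Println(err)
	}
}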
	I0731 22:41:04.213465 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.213947 1194386 main.go:141] libmachine: (ha-150891) Found IP for machine: 192.168.39.105
	I0731 22:41:04.213969 1194386 main.go:141] libmachine: (ha-150891) Reserving static IP address...
	I0731 22:41:04.213988 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has current primary IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.214287 1194386 main.go:141] libmachine: (ha-150891) DBG | unable to find host DHCP lease matching {name: "ha-150891", mac: "52:54:00:5d:5d:f5", ip: "192.168.39.105"} in network mk-ha-150891
	I0731 22:41:04.296679 1194386 main.go:141] libmachine: (ha-150891) DBG | Getting to WaitForSSH function...
	I0731 22:41:04.296713 1194386 main.go:141] libmachine: (ha-150891) Reserved static IP address: 192.168.39.105
	I0731 22:41:04.296727 1194386 main.go:141] libmachine: (ha-150891) Waiting for SSH to be available...
	I0731 22:41:04.299421 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.299881 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.299928 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.300075 1194386 main.go:141] libmachine: (ha-150891) DBG | Using SSH client type: external
	I0731 22:41:04.300113 1194386 main.go:141] libmachine: (ha-150891) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa (-rw-------)
	I0731 22:41:04.300146 1194386 main.go:141] libmachine: (ha-150891) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 22:41:04.300160 1194386 main.go:141] libmachine: (ha-150891) DBG | About to run SSH command:
	I0731 22:41:04.300173 1194386 main.go:141] libmachine: (ha-150891) DBG | exit 0
	I0731 22:41:04.427992 1194386 main.go:141] libmachine: (ha-150891) DBG | SSH cmd err, output: <nil>: 
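WaitForSSH shells out to the external ssh binary with the options listed above and runs `exit 0` until the command succeeds. A sketch of that probe with os/exec (the host comes from the log; the key path is a placeholder, and only a subset of the options is shown):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeSSH runs `ssh ... exit 0` against the guest and reports whether the
// command exited cleanly, the check behind "Waiting for SSH to be available...".
func probeSSH(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath, // placeholder for .minikube/machines/<name>/id_rsa
		"-p", "22",
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 30; i++ {
		if probeSSH("192.168.39.105", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}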
	I0731 22:41:04.428256 1194386 main.go:141] libmachine: (ha-150891) KVM machine creation complete!
	I0731 22:41:04.428576 1194386 main.go:141] libmachine: (ha-150891) Calling .GetConfigRaw
	I0731 22:41:04.429106 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:04.429317 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:04.429459 1194386 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 22:41:04.429475 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:41:04.430805 1194386 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 22:41:04.430829 1194386 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 22:41:04.430836 1194386 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 22:41:04.430845 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:04.433301 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.433677 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.433694 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.433869 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:04.434068 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.434240 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.434401 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:04.434559 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:04.434796 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:04.434811 1194386 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 22:41:04.543286 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:41:04.543317 1194386 main.go:141] libmachine: Detecting the provisioner...
	I0731 22:41:04.543326 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:04.546258 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.546597 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.546629 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.546765 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:04.546976 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.547150 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.547289 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:04.547442 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:04.547635 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:04.547648 1194386 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 22:41:04.656499 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 22:41:04.656634 1194386 main.go:141] libmachine: found compatible host: buildroot
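Provisioner detection reduces to reading /etc/os-release over SSH and matching the ID field, which is "buildroot" on the minikube ISO. A local sketch of that parse (not the libmachine detector itself):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads key=value pairs such as ID=buildroot and
// VERSION_ID=2023.02.9 from an os-release style file.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	out := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		out[k] = strings.Trim(v, `"`)
	}
	return out, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("found compatible host:", info["ID"])
}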
	I0731 22:41:04.656650 1194386 main.go:141] libmachine: Provisioning with buildroot...
	I0731 22:41:04.656665 1194386 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:41:04.656948 1194386 buildroot.go:166] provisioning hostname "ha-150891"
	I0731 22:41:04.656979 1194386 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:41:04.657174 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:04.659719 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.660076 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.660120 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.660289 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:04.660494 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.660667 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.660801 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:04.660968 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:04.661150 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:04.661164 1194386 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-150891 && echo "ha-150891" | sudo tee /etc/hostname
	I0731 22:41:04.784816 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150891
	
	I0731 22:41:04.784860 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:04.787627 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.788011 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.788044 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.788224 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:04.788425 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.788568 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:04.788752 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:04.788919 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:04.789126 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:04.789146 1194386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-150891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-150891/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-150891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:41:04.908378 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:41:04.908418 1194386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 22:41:04.908449 1194386 buildroot.go:174] setting up certificates
	I0731 22:41:04.908465 1194386 provision.go:84] configureAuth start
	I0731 22:41:04.908480 1194386 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:41:04.908761 1194386 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:41:04.911505 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.911830 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.911848 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.912008 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:04.913965 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.914247 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:04.914274 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:04.914419 1194386 provision.go:143] copyHostCerts
	I0731 22:41:04.914453 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:41:04.914486 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 22:41:04.914495 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:41:04.914560 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 22:41:04.914640 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:41:04.914657 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 22:41:04.914663 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:41:04.914688 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 22:41:04.914731 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:41:04.914747 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 22:41:04.914753 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:41:04.914773 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 22:41:04.914833 1194386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.ha-150891 san=[127.0.0.1 192.168.39.105 ha-150891 localhost minikube]
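The server certificate is issued with the SANs listed in the log (127.0.0.1, the guest IP, the hostname, localhost, minikube). A compact crypto/x509 sketch showing how such SANs are attached; it self-signs for brevity, whereas the real flow signs with the CA key from certs/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-150891"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the "san=[...]" list in the log line above.
		DNSNames:    []string{"ha-150891", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.105")},
	}
	// Self-signed for brevity; minikube signs with its CA cert/key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}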
	I0731 22:41:05.110288 1194386 provision.go:177] copyRemoteCerts
	I0731 22:41:05.110350 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:41:05.110378 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.112979 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.113348 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.113379 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.113551 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.113746 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.113889 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.114015 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:41:05.197429 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 22:41:05.197521 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 22:41:05.221033 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 22:41:05.221124 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0731 22:41:05.249459 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 22:41:05.249538 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 22:41:05.272106 1194386 provision.go:87] duration metric: took 363.612751ms to configureAuth
	I0731 22:41:05.272136 1194386 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:41:05.272326 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:41:05.272419 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.275035 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.275336 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.275360 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.275541 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.275728 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.275885 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.276008 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.276163 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:05.276381 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:05.276402 1194386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 22:41:05.545956 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 22:41:05.545991 1194386 main.go:141] libmachine: Checking connection to Docker...
	I0731 22:41:05.546000 1194386 main.go:141] libmachine: (ha-150891) Calling .GetURL
	I0731 22:41:05.547315 1194386 main.go:141] libmachine: (ha-150891) DBG | Using libvirt version 6000000
	I0731 22:41:05.549542 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.549911 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.549938 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.550147 1194386 main.go:141] libmachine: Docker is up and running!
	I0731 22:41:05.550165 1194386 main.go:141] libmachine: Reticulating splines...
	I0731 22:41:05.550172 1194386 client.go:171] duration metric: took 24.953879283s to LocalClient.Create
	I0731 22:41:05.550202 1194386 start.go:167] duration metric: took 24.953948776s to libmachine.API.Create "ha-150891"
	I0731 22:41:05.550215 1194386 start.go:293] postStartSetup for "ha-150891" (driver="kvm2")
	I0731 22:41:05.550228 1194386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:41:05.550253 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:05.550518 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:41:05.550546 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.552887 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.553264 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.553293 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.553427 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.553646 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.553821 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.553927 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:41:05.638306 1194386 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:41:05.642316 1194386 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:41:05.642356 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 22:41:05.642476 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 22:41:05.642578 1194386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 22:41:05.642592 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /etc/ssl/certs/11794002.pem
	I0731 22:41:05.642713 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:41:05.652005 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:41:05.674443 1194386 start.go:296] duration metric: took 124.211165ms for postStartSetup
	I0731 22:41:05.674517 1194386 main.go:141] libmachine: (ha-150891) Calling .GetConfigRaw
	I0731 22:41:05.675191 1194386 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:41:05.677842 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.678312 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.678341 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.678593 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:41:05.678780 1194386 start.go:128] duration metric: took 25.102776872s to createHost
	I0731 22:41:05.678802 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.681108 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.681384 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.681417 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.681567 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.681768 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.681945 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.682076 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.682248 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:41:05.682469 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:41:05.682488 1194386 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 22:41:05.792420 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465665.770360467
	
	I0731 22:41:05.792448 1194386 fix.go:216] guest clock: 1722465665.770360467
	I0731 22:41:05.792459 1194386 fix.go:229] Guest: 2024-07-31 22:41:05.770360467 +0000 UTC Remote: 2024-07-31 22:41:05.678790863 +0000 UTC m=+25.213575611 (delta=91.569604ms)
	I0731 22:41:05.792518 1194386 fix.go:200] guest clock delta is within tolerance: 91.569604ms
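The guest clock check parses the `date +%s.%N` output and compares it with the host time recorded just before the call; a small delta is tolerated, otherwise the clock would be resynced. A sketch of that comparison (the one-second tolerance here is illustrative):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns "1722465665.770360467" into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nanos := int64(0)
	if frac != "" {
		// Pad or truncate to exactly 9 digits of nanoseconds.
		frac = (frac + "000000000")[:9]
		if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(secs, nanos), nil
}

func main() {
	guest, err := parseEpoch("1722465665.770360467") // value from the log
	if err != nil {
		fmt.Println(err)
		return
	}
	delta := time.Since(guest)
	if math.Abs(delta.Seconds()) < 1.0 {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v, would resync\n", delta)
	}
}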
	I0731 22:41:05.792524 1194386 start.go:83] releasing machines lock for "ha-150891", held for 25.216603122s
	I0731 22:41:05.792556 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:05.792900 1194386 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:41:05.795610 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.795928 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.795974 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.796125 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:05.796595 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:05.796792 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:05.796889 1194386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 22:41:05.796934 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.797041 1194386 ssh_runner.go:195] Run: cat /version.json
	I0731 22:41:05.797065 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:05.799703 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.800032 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.800061 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.800082 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.800188 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.800404 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.800495 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:05.800514 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:05.800571 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.800664 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:05.800790 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:05.800772 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:41:05.800930 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:05.801081 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:41:05.920240 1194386 ssh_runner.go:195] Run: systemctl --version
	I0731 22:41:05.925953 1194386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 22:41:06.082497 1194386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 22:41:06.087909 1194386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:41:06.087979 1194386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:41:06.103788 1194386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 22:41:06.103818 1194386 start.go:495] detecting cgroup driver to use...
	I0731 22:41:06.103884 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:41:06.119532 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:41:06.133685 1194386 docker.go:217] disabling cri-docker service (if available) ...
	I0731 22:41:06.133744 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 22:41:06.147619 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 22:41:06.161135 1194386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 22:41:06.282997 1194386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 22:41:06.434011 1194386 docker.go:233] disabling docker service ...
	I0731 22:41:06.434099 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 22:41:06.448041 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 22:41:06.460849 1194386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 22:41:06.592412 1194386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 22:41:06.714931 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 22:41:06.729443 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:41:06.747342 1194386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 22:41:06.747405 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.757370 1194386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 22:41:06.757454 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.767795 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.777947 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.788189 1194386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:41:06.798625 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.808841 1194386 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:41:06.825259 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
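The CRI-O tuning above is plain sed surgery on /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force the cgroupfs cgroup manager, put conmon in the pod cgroup, and open unprivileged ports. A rough local equivalent of those rewrites with regexp (patterns taken from the commands above; it omits deleting pre-existing conmon_cgroup lines and the sysctl additions, and it is not minikube's ssh_runner code):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	// pause_image = "registry.k8s.io/pause:3.9"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// cgroup_manager = "cgroupfs"
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Append conmon_cgroup = "pod" after cgroup_manager, like the sed '/a' above.
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
}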
	I0731 22:41:06.835757 1194386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:41:06.845132 1194386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 22:41:06.845200 1194386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 22:41:06.858527 1194386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:41:06.868444 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:41:06.983481 1194386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 22:41:07.126787 1194386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 22:41:07.126858 1194386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 22:41:07.131508 1194386 start.go:563] Will wait 60s for crictl version
	I0731 22:41:07.131564 1194386 ssh_runner.go:195] Run: which crictl
	I0731 22:41:07.135221 1194386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:41:07.171263 1194386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 22:41:07.171349 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:41:07.197291 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:41:07.225531 1194386 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 22:41:07.227103 1194386 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:41:07.229913 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:07.230265 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:07.230294 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:07.230510 1194386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 22:41:07.234402 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
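host.minikube.internal is kept current by filtering any stale mapping out of /etc/hosts and appending the gateway IP, which is what the bash one-liner above does. A native sketch of the same rewrite (run directly on the guest rather than over SSH):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous host.minikube.internal mapping.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing empty lines, then append the fresh entry.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		log.Fatal(err)
	}
}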
	I0731 22:41:07.246522 1194386 kubeadm.go:883] updating cluster {Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 22:41:07.246680 1194386 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:41:07.246750 1194386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:41:07.277126 1194386 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0731 22:41:07.277206 1194386 ssh_runner.go:195] Run: which lz4
	I0731 22:41:07.280976 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0731 22:41:07.281081 1194386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 22:41:07.285018 1194386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 22:41:07.285055 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0731 22:41:08.624169 1194386 crio.go:462] duration metric: took 1.343113145s to copy over tarball
	I0731 22:41:08.624241 1194386 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 22:41:10.788346 1194386 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.164070863s)
	I0731 22:41:10.788383 1194386 crio.go:469] duration metric: took 2.164182212s to extract the tarball
	I0731 22:41:10.788394 1194386 ssh_runner.go:146] rm: /preloaded.tar.lz4
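The preload is copied to the guest as /preloaded.tar.lz4 and unpacked into /var with tar's lz4 filter while preserving security.capability xattrs, then removed. A small os/exec sketch of the extraction step (run locally here; minikube executes it over SSH):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors: sudo tar --xattrs --xattrs-include security.capability \
	//              -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4",
		"-C", "/var",
		"-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Print("preload extracted into /var")
}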
	I0731 22:41:10.825709 1194386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:41:10.873399 1194386 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 22:41:10.873429 1194386 cache_images.go:84] Images are preloaded, skipping loading
	I0731 22:41:10.873440 1194386 kubeadm.go:934] updating node { 192.168.39.105 8443 v1.30.3 crio true true} ...
	I0731 22:41:10.873580 1194386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-150891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:41:10.873654 1194386 ssh_runner.go:195] Run: crio config
	I0731 22:41:10.916824 1194386 cni.go:84] Creating CNI manager for ""
	I0731 22:41:10.916846 1194386 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 22:41:10.916858 1194386 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 22:41:10.916881 1194386 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.105 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-150891 NodeName:ha-150891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 22:41:10.917021 1194386 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-150891"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
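
The generated kubeadm.yaml above bundles several YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); note that hard eviction is effectively disabled via the "0%" thresholds. A small sketch, assuming gopkg.in/yaml.v3 and the on-node path /var/tmp/minikube/kubeadm.yaml shown later in the log, that walks the multi-document file and prints the kubelet settings (illustrative only):

// Illustrative sketch (not minikube code): iterate the multi-document
// kubeadm.yaml and print fields from the KubeletConfiguration document.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])
			fmt.Println("evictionHard:", doc["evictionHard"])
		}
	}
}
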
	
	I0731 22:41:10.917046 1194386 kube-vip.go:115] generating kube-vip config ...
	I0731 22:41:10.917090 1194386 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:41:10.932857 1194386 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:41:10.932998 1194386 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0731 22:41:10.933078 1194386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:41:10.942834 1194386 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 22:41:10.942932 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 22:41:10.952719 1194386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 22:41:10.969180 1194386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:41:10.985491 1194386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 22:41:11.001705 1194386 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0731 22:41:11.018193 1194386 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:41:11.021800 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
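
The one-liner above makes control-plane.minikube.internal resolve to the HA VIP 192.168.39.254: it filters out any existing entry for that name from /etc/hosts, appends the new record, and copies the temp file back with sudo. A rough Go equivalent, purely illustrative and requiring root to write /etc/hosts:

// Illustrative sketch of the /etc/hosts rewrite shown above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const record = "192.168.39.254\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		// drop any stale entry for the control-plane name
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, record)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
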
	I0731 22:41:11.033871 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:41:11.158730 1194386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:41:11.175706 1194386 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891 for IP: 192.168.39.105
	I0731 22:41:11.175736 1194386 certs.go:194] generating shared ca certs ...
	I0731 22:41:11.175758 1194386 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.175968 1194386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 22:41:11.176025 1194386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 22:41:11.176038 1194386 certs.go:256] generating profile certs ...
	I0731 22:41:11.176134 1194386 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key
	I0731 22:41:11.176155 1194386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt with IP's: []
	I0731 22:41:11.342866 1194386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt ...
	I0731 22:41:11.342898 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt: {Name:mka7ac5725d8bbe92340ca35d53fce869b691752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.343080 1194386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key ...
	I0731 22:41:11.343092 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key: {Name:mk2dbd419cac26e8d9b1d180d735f6df2973a848 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.343170 1194386 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.3d819f23
	I0731 22:41:11.343186 1194386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.3d819f23 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.105 192.168.39.254]
	I0731 22:41:11.446273 1194386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.3d819f23 ...
	I0731 22:41:11.446307 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.3d819f23: {Name:mk1d553d14c68d12e4fbac01a9a120a94f6e845a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.446479 1194386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.3d819f23 ...
	I0731 22:41:11.446494 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.3d819f23: {Name:mkcc3095f5ddb4b2831a10534845e98d0392f0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.446572 1194386 certs.go:381] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.3d819f23 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt
	I0731 22:41:11.446650 1194386 certs.go:385] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.3d819f23 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key
	I0731 22:41:11.446709 1194386 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key
	I0731 22:41:11.446724 1194386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt with IP's: []
	I0731 22:41:11.684370 1194386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt ...
	I0731 22:41:11.684408 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt: {Name:mk9556239b50cd6cb62e7d5272ceeed0a2985331 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:11.684590 1194386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key ...
	I0731 22:41:11.684601 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key: {Name:mkb90591deb06e12c16008f6a11dd2ff071a9c50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
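
The apiserver profile certificate above is issued for the service IP, loopback, the node IP, and the HA VIP. A minimal sketch of producing a certificate with the same IP SANs using Go's crypto/x509 (self-signed here for brevity, whereas minikube signs it with the minikubeCA key):

// Minimal sketch, not minikube's crypto.go: generate a key and a certificate
// whose IP SANs match the list in the log. Self-signed for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.105"),
			net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
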
	I0731 22:41:11.684673 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:41:11.684694 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:41:11.684708 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:41:11.684721 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:41:11.684739 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:41:11.684753 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:41:11.684765 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:41:11.684777 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:41:11.684833 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 22:41:11.684872 1194386 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 22:41:11.684879 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 22:41:11.684899 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 22:41:11.684921 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 22:41:11.684945 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 22:41:11.684996 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:41:11.685029 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:41:11.685044 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem -> /usr/share/ca-certificates/1179400.pem
	I0731 22:41:11.685056 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /usr/share/ca-certificates/11794002.pem
	I0731 22:41:11.685576 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:41:11.710782 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:41:11.733663 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:41:11.756970 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 22:41:11.781366 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 22:41:11.805830 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 22:41:11.830355 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:41:11.856325 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:41:11.880167 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:41:11.903616 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 22:41:11.931832 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 22:41:11.958758 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 22:41:11.977199 1194386 ssh_runner.go:195] Run: openssl version
	I0731 22:41:11.983009 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:41:11.998583 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:41:12.003074 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:41:12.003136 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:41:12.008826 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:41:12.019438 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 22:41:12.029943 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 22:41:12.034176 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 22:41:12.034238 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 22:41:12.039750 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
	I0731 22:41:12.050157 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 22:41:12.060658 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 22:41:12.065078 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 22:41:12.065153 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 22:41:12.070665 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
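
Each CA copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0), which is what the openssl x509 -hash / ln -fs pairs above do. An illustrative sketch of that step, assuming the openssl binary is on PATH and using the cert path from the log:

// Illustrative sketch of the trust-store step: compute the OpenSSL subject
// hash of a PEM certificate and link it as /etc/ssl/certs/<hash>.0.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"

	// ln -fs behaviour: drop any stale link, then point the hash name at the cert.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, cert)
}
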
	I0731 22:41:12.081211 1194386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:41:12.085259 1194386 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:41:12.085320 1194386 kubeadm.go:392] StartCluster: {Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:41:12.085423 1194386 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 22:41:12.085475 1194386 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 22:41:12.125444 1194386 cri.go:89] found id: ""
	I0731 22:41:12.125526 1194386 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 22:41:12.135046 1194386 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 22:41:12.145651 1194386 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 22:41:12.157866 1194386 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 22:41:12.157887 1194386 kubeadm.go:157] found existing configuration files:
	
	I0731 22:41:12.157933 1194386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 22:41:12.166742 1194386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 22:41:12.166808 1194386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 22:41:12.176351 1194386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 22:41:12.185445 1194386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 22:41:12.185530 1194386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 22:41:12.194673 1194386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 22:41:12.203308 1194386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 22:41:12.203375 1194386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 22:41:12.212579 1194386 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 22:41:12.221043 1194386 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 22:41:12.221110 1194386 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 22:41:12.230240 1194386 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 22:41:12.337139 1194386 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0731 22:41:12.337231 1194386 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 22:41:12.454022 1194386 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 22:41:12.454122 1194386 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 22:41:12.454202 1194386 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 22:41:12.651958 1194386 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 22:41:12.820071 1194386 out.go:204]   - Generating certificates and keys ...
	I0731 22:41:12.820207 1194386 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 22:41:12.820294 1194386 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 22:41:12.820392 1194386 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 22:41:13.110139 1194386 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 22:41:13.216541 1194386 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 22:41:13.411109 1194386 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 22:41:13.619081 1194386 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 22:41:13.619351 1194386 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-150891 localhost] and IPs [192.168.39.105 127.0.0.1 ::1]
	I0731 22:41:13.808874 1194386 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 22:41:13.809040 1194386 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-150891 localhost] and IPs [192.168.39.105 127.0.0.1 ::1]
	I0731 22:41:13.899652 1194386 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 22:41:14.212030 1194386 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 22:41:14.277510 1194386 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 22:41:14.277689 1194386 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 22:41:14.357327 1194386 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 22:41:14.457066 1194386 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0731 22:41:14.586947 1194386 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 22:41:14.708144 1194386 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 22:41:14.897018 1194386 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 22:41:14.897969 1194386 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 22:41:14.902912 1194386 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 22:41:14.904887 1194386 out.go:204]   - Booting up control plane ...
	I0731 22:41:14.905023 1194386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 22:41:14.905165 1194386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 22:41:14.905736 1194386 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 22:41:14.920678 1194386 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 22:41:14.921853 1194386 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 22:41:14.921920 1194386 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 22:41:15.049437 1194386 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0731 22:41:15.049548 1194386 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0731 22:41:16.550294 1194386 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501714672s
	I0731 22:41:16.550407 1194386 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0731 22:41:22.251964 1194386 kubeadm.go:310] [api-check] The API server is healthy after 5.704368146s
	I0731 22:41:22.264322 1194386 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0731 22:41:22.281315 1194386 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0731 22:41:22.321405 1194386 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0731 22:41:22.321586 1194386 kubeadm.go:310] [mark-control-plane] Marking the node ha-150891 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0731 22:41:22.335168 1194386 kubeadm.go:310] [bootstrap-token] Using token: x6vrvl.scxwa3uy3g8m39yp
	I0731 22:41:22.336566 1194386 out.go:204]   - Configuring RBAC rules ...
	I0731 22:41:22.336714 1194386 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0731 22:41:22.344044 1194386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0731 22:41:22.352698 1194386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0731 22:41:22.357423 1194386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0731 22:41:22.362009 1194386 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0731 22:41:22.370027 1194386 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0731 22:41:22.658209 1194386 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0731 22:41:23.098221 1194386 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0731 22:41:23.659119 1194386 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0731 22:41:23.660605 1194386 kubeadm.go:310] 
	I0731 22:41:23.660707 1194386 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0731 22:41:23.660718 1194386 kubeadm.go:310] 
	I0731 22:41:23.660809 1194386 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0731 22:41:23.660818 1194386 kubeadm.go:310] 
	I0731 22:41:23.660854 1194386 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0731 22:41:23.660944 1194386 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0731 22:41:23.661006 1194386 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0731 22:41:23.661016 1194386 kubeadm.go:310] 
	I0731 22:41:23.661087 1194386 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0731 22:41:23.661095 1194386 kubeadm.go:310] 
	I0731 22:41:23.661163 1194386 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0731 22:41:23.661178 1194386 kubeadm.go:310] 
	I0731 22:41:23.661254 1194386 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0731 22:41:23.661368 1194386 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0731 22:41:23.661449 1194386 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0731 22:41:23.661456 1194386 kubeadm.go:310] 
	I0731 22:41:23.661542 1194386 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0731 22:41:23.661673 1194386 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0731 22:41:23.661696 1194386 kubeadm.go:310] 
	I0731 22:41:23.661818 1194386 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token x6vrvl.scxwa3uy3g8m39yp \
	I0731 22:41:23.661947 1194386 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef \
	I0731 22:41:23.661971 1194386 kubeadm.go:310] 	--control-plane 
	I0731 22:41:23.661975 1194386 kubeadm.go:310] 
	I0731 22:41:23.662048 1194386 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0731 22:41:23.662054 1194386 kubeadm.go:310] 
	I0731 22:41:23.662122 1194386 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token x6vrvl.scxwa3uy3g8m39yp \
	I0731 22:41:23.662237 1194386 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef 
	I0731 22:41:23.662727 1194386 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 22:41:23.662761 1194386 cni.go:84] Creating CNI manager for ""
	I0731 22:41:23.662769 1194386 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0731 22:41:23.664440 1194386 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0731 22:41:23.665937 1194386 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0731 22:41:23.671296 1194386 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0731 22:41:23.671319 1194386 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0731 22:41:23.688520 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0731 22:41:24.012012 1194386 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 22:41:24.012150 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:24.012216 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-150891 minikube.k8s.io/updated_at=2024_07_31T22_41_24_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-150891 minikube.k8s.io/primary=true
	I0731 22:41:24.030175 1194386 ops.go:34] apiserver oom_adj: -16
	I0731 22:41:24.209920 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:24.710079 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:25.210961 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:25.710439 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:26.210064 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:26.710701 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:27.210208 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:27.710168 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:28.210619 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:28.710698 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:29.210551 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:29.710836 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:30.210738 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:30.710521 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:31.210064 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:31.709968 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:32.210941 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:32.710909 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:33.210335 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:33.710327 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:34.210666 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:34.710753 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:35.210848 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:35.710898 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:36.210690 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0731 22:41:36.332701 1194386 kubeadm.go:1113] duration metric: took 12.320634484s to wait for elevateKubeSystemPrivileges
	I0731 22:41:36.332742 1194386 kubeadm.go:394] duration metric: took 24.247425712s to StartCluster
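
The burst of `kubectl get sa default` runs above is a ~500ms poll that waits for the default ServiceAccount to exist before the cluster is treated as usable (the log reports it took about 12.3s). A hedged sketch of the same loop, with the binary and kubeconfig paths taken from the log and a timeout added for illustration:

// Illustrative sketch (not minikube's elevateKubeSystemPrivileges): poll for
// the default ServiceAccount roughly every 500ms until it exists or we time out.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.30.3/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			log.Println("default service account is ready")
			return
		}
		if time.Now().After(deadline) {
			log.Fatal("timed out waiting for the default service account")
		}
		time.Sleep(500 * time.Millisecond)
	}
}
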
	I0731 22:41:36.332762 1194386 settings.go:142] acquiring lock: {Name:mk076897bfd1af81579aafbccfd5a932e011b343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:36.332873 1194386 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:41:36.333675 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/kubeconfig: {Name:mk2865fa7a14d2aa7ec2bbf6e970de47767d4a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:41:36.333909 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0731 22:41:36.333919 1194386 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:41:36.333946 1194386 start.go:241] waiting for startup goroutines ...
	I0731 22:41:36.333961 1194386 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 22:41:36.334019 1194386 addons.go:69] Setting storage-provisioner=true in profile "ha-150891"
	I0731 22:41:36.334029 1194386 addons.go:69] Setting default-storageclass=true in profile "ha-150891"
	I0731 22:41:36.334071 1194386 addons.go:234] Setting addon storage-provisioner=true in "ha-150891"
	I0731 22:41:36.334110 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:41:36.334156 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:41:36.334072 1194386 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-150891"
	I0731 22:41:36.335195 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:36.335272 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:36.336262 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:36.336763 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:36.351259 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I0731 22:41:36.351773 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:36.352285 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:36.352316 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:36.352680 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:36.352903 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:41:36.355520 1194386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:41:36.355877 1194386 kapi.go:59] client config for ha-150891: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d035c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 22:41:36.356454 1194386 cert_rotation.go:137] Starting client certificate rotation controller
	I0731 22:41:36.356689 1194386 addons.go:234] Setting addon default-storageclass=true in "ha-150891"
	I0731 22:41:36.356731 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:41:36.357113 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:36.357131 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:36.357957 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35383
	I0731 22:41:36.358483 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:36.359098 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:36.359125 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:36.359473 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:36.360078 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:36.360142 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:36.373982 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41647
	I0731 22:41:36.374600 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:36.375102 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:36.375129 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:36.375493 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:36.376053 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40873
	I0731 22:41:36.376316 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:36.376369 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:36.376516 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:36.377000 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:36.377021 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:36.377391 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:36.377583 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:41:36.379506 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:36.381356 1194386 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 22:41:36.382619 1194386 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:41:36.382642 1194386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 22:41:36.382664 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:36.386025 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:36.386516 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:36.386541 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:36.386731 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:36.386954 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:36.387136 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:36.387266 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:41:36.394148 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I0731 22:41:36.394615 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:36.395123 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:36.395145 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:36.395468 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:36.395664 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:41:36.397216 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:41:36.397444 1194386 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 22:41:36.397458 1194386 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 22:41:36.397472 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:41:36.400128 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:36.400612 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:41:36.400633 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:41:36.400866 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:41:36.401035 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:41:36.401217 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:41:36.401326 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:41:36.482204 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
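
The pipeline above edits the coredns ConfigMap in place: it injects a hosts{} stanza mapping host.minikube.internal to the gateway IP 192.168.39.1 just before the forward plugin, then feeds the result to kubectl replace. A string-level sketch of that insertion (illustrative; the real flow pipes the ConfigMap through kubectl rather than a literal Corefile string):

// Illustrative sketch of the Corefile edit performed via sed above: insert a
// hosts{} block right before the "forward . /etc/resolv.conf" line.
package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
    errors
    health
    forward . /etc/resolv.conf
    cache 30
}`
	hostsBlock := `    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
`
	var b strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock) // inject just before the forward plugin
		}
		b.WriteString(line + "\n")
	}
	fmt.Print(b.String())
}
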
	I0731 22:41:36.533458 1194386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 22:41:36.591754 1194386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 22:41:36.977914 1194386 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0731 22:41:37.231623 1194386 main.go:141] libmachine: Making call to close driver server
	I0731 22:41:37.231654 1194386 main.go:141] libmachine: (ha-150891) Calling .Close
	I0731 22:41:37.231701 1194386 main.go:141] libmachine: Making call to close driver server
	I0731 22:41:37.231726 1194386 main.go:141] libmachine: (ha-150891) Calling .Close
	I0731 22:41:37.231984 1194386 main.go:141] libmachine: (ha-150891) DBG | Closing plugin on server side
	I0731 22:41:37.232023 1194386 main.go:141] libmachine: Successfully made call to close driver server
	I0731 22:41:37.232031 1194386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 22:41:37.232040 1194386 main.go:141] libmachine: Making call to close driver server
	I0731 22:41:37.232051 1194386 main.go:141] libmachine: (ha-150891) Calling .Close
	I0731 22:41:37.232105 1194386 main.go:141] libmachine: Successfully made call to close driver server
	I0731 22:41:37.232119 1194386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 22:41:37.232128 1194386 main.go:141] libmachine: Making call to close driver server
	I0731 22:41:37.232173 1194386 main.go:141] libmachine: (ha-150891) DBG | Closing plugin on server side
	I0731 22:41:37.232244 1194386 main.go:141] libmachine: (ha-150891) Calling .Close
	I0731 22:41:37.232346 1194386 main.go:141] libmachine: Successfully made call to close driver server
	I0731 22:41:37.232354 1194386 main.go:141] libmachine: (ha-150891) DBG | Closing plugin on server side
	I0731 22:41:37.232361 1194386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 22:41:37.232502 1194386 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0731 22:41:37.232513 1194386 main.go:141] libmachine: Successfully made call to close driver server
	I0731 22:41:37.232518 1194386 round_trippers.go:469] Request Headers:
	I0731 22:41:37.232526 1194386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 22:41:37.232542 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:41:37.232552 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:41:37.248384 1194386 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0731 22:41:37.249211 1194386 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0731 22:41:37.249230 1194386 round_trippers.go:469] Request Headers:
	I0731 22:41:37.249242 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:41:37.249249 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:41:37.249256 1194386 round_trippers.go:473]     Content-Type: application/json
	I0731 22:41:37.253415 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:41:37.253901 1194386 main.go:141] libmachine: Making call to close driver server
	I0731 22:41:37.253915 1194386 main.go:141] libmachine: (ha-150891) Calling .Close
	I0731 22:41:37.254222 1194386 main.go:141] libmachine: Successfully made call to close driver server
	I0731 22:41:37.254237 1194386 main.go:141] libmachine: (ha-150891) DBG | Closing plugin on server side
	I0731 22:41:37.254244 1194386 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 22:41:37.256161 1194386 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 22:41:37.257403 1194386 addons.go:510] duration metric: took 923.440407ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 22:41:37.257451 1194386 start.go:246] waiting for cluster config update ...
	I0731 22:41:37.257466 1194386 start.go:255] writing updated cluster config ...
	I0731 22:41:37.259122 1194386 out.go:177] 
	I0731 22:41:37.260573 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:41:37.260653 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:41:37.262170 1194386 out.go:177] * Starting "ha-150891-m02" control-plane node in "ha-150891" cluster
	I0731 22:41:37.263347 1194386 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:41:37.263376 1194386 cache.go:56] Caching tarball of preloaded images
	I0731 22:41:37.263489 1194386 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 22:41:37.263501 1194386 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 22:41:37.263567 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:41:37.263750 1194386 start.go:360] acquireMachinesLock for ha-150891-m02: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:41:37.263792 1194386 start.go:364] duration metric: took 23.392µs to acquireMachinesLock for "ha-150891-m02"
	I0731 22:41:37.263809 1194386 start.go:93] Provisioning new machine with config: &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:41:37.263902 1194386 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0731 22:41:37.265399 1194386 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:41:37.265485 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:41:37.265511 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:41:37.281435 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45467
	I0731 22:41:37.281916 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:41:37.282361 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:41:37.282382 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:41:37.282815 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:41:37.283049 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetMachineName
	I0731 22:41:37.283211 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:41:37.283390 1194386 start.go:159] libmachine.API.Create for "ha-150891" (driver="kvm2")
	I0731 22:41:37.283418 1194386 client.go:168] LocalClient.Create starting
	I0731 22:41:37.283458 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem
	I0731 22:41:37.283500 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:41:37.283520 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:41:37.283591 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem
	I0731 22:41:37.283622 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:41:37.283638 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:41:37.283660 1194386 main.go:141] libmachine: Running pre-create checks...
	I0731 22:41:37.283671 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .PreCreateCheck
	I0731 22:41:37.283878 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetConfigRaw
	I0731 22:41:37.284351 1194386 main.go:141] libmachine: Creating machine...
	I0731 22:41:37.284371 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .Create
	I0731 22:41:37.284521 1194386 main.go:141] libmachine: (ha-150891-m02) Creating KVM machine...
	I0731 22:41:37.285838 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found existing default KVM network
	I0731 22:41:37.285981 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found existing private KVM network mk-ha-150891
	I0731 22:41:37.286143 1194386 main.go:141] libmachine: (ha-150891-m02) Setting up store path in /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02 ...
	I0731 22:41:37.286168 1194386 main.go:141] libmachine: (ha-150891-m02) Building disk image from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 22:41:37.286228 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:37.286125 1194775 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:41:37.286362 1194386 main.go:141] libmachine: (ha-150891-m02) Downloading /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:41:37.559348 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:37.559213 1194775 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa...
	I0731 22:41:37.747723 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:37.747586 1194775 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/ha-150891-m02.rawdisk...
	I0731 22:41:37.747751 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Writing magic tar header
	I0731 22:41:37.747761 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Writing SSH key tar header
	I0731 22:41:37.747769 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:37.747733 1194775 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02 ...
	I0731 22:41:37.747892 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02
	I0731 22:41:37.747917 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines
	I0731 22:41:37.747930 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02 (perms=drwx------)
	I0731 22:41:37.747945 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines (perms=drwxr-xr-x)
	I0731 22:41:37.747956 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:41:37.747967 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube (perms=drwxr-xr-x)
	I0731 22:41:37.747980 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186 (perms=drwxrwxr-x)
	I0731 22:41:37.747990 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186
	I0731 22:41:37.747998 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 22:41:37.748005 1194386 main.go:141] libmachine: (ha-150891-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 22:41:37.748017 1194386 main.go:141] libmachine: (ha-150891-m02) Creating domain...
	I0731 22:41:37.748033 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 22:41:37.748045 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home/jenkins
	I0731 22:41:37.748054 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Checking permissions on dir: /home
	I0731 22:41:37.748063 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Skipping /home - not owner
	I0731 22:41:37.749054 1194386 main.go:141] libmachine: (ha-150891-m02) define libvirt domain using xml: 
	I0731 22:41:37.749082 1194386 main.go:141] libmachine: (ha-150891-m02) <domain type='kvm'>
	I0731 22:41:37.749094 1194386 main.go:141] libmachine: (ha-150891-m02)   <name>ha-150891-m02</name>
	I0731 22:41:37.749102 1194386 main.go:141] libmachine: (ha-150891-m02)   <memory unit='MiB'>2200</memory>
	I0731 22:41:37.749111 1194386 main.go:141] libmachine: (ha-150891-m02)   <vcpu>2</vcpu>
	I0731 22:41:37.749121 1194386 main.go:141] libmachine: (ha-150891-m02)   <features>
	I0731 22:41:37.749126 1194386 main.go:141] libmachine: (ha-150891-m02)     <acpi/>
	I0731 22:41:37.749131 1194386 main.go:141] libmachine: (ha-150891-m02)     <apic/>
	I0731 22:41:37.749137 1194386 main.go:141] libmachine: (ha-150891-m02)     <pae/>
	I0731 22:41:37.749141 1194386 main.go:141] libmachine: (ha-150891-m02)     
	I0731 22:41:37.749152 1194386 main.go:141] libmachine: (ha-150891-m02)   </features>
	I0731 22:41:37.749160 1194386 main.go:141] libmachine: (ha-150891-m02)   <cpu mode='host-passthrough'>
	I0731 22:41:37.749165 1194386 main.go:141] libmachine: (ha-150891-m02)   
	I0731 22:41:37.749169 1194386 main.go:141] libmachine: (ha-150891-m02)   </cpu>
	I0731 22:41:37.749174 1194386 main.go:141] libmachine: (ha-150891-m02)   <os>
	I0731 22:41:37.749179 1194386 main.go:141] libmachine: (ha-150891-m02)     <type>hvm</type>
	I0731 22:41:37.749210 1194386 main.go:141] libmachine: (ha-150891-m02)     <boot dev='cdrom'/>
	I0731 22:41:37.749238 1194386 main.go:141] libmachine: (ha-150891-m02)     <boot dev='hd'/>
	I0731 22:41:37.749250 1194386 main.go:141] libmachine: (ha-150891-m02)     <bootmenu enable='no'/>
	I0731 22:41:37.749261 1194386 main.go:141] libmachine: (ha-150891-m02)   </os>
	I0731 22:41:37.749273 1194386 main.go:141] libmachine: (ha-150891-m02)   <devices>
	I0731 22:41:37.749285 1194386 main.go:141] libmachine: (ha-150891-m02)     <disk type='file' device='cdrom'>
	I0731 22:41:37.749309 1194386 main.go:141] libmachine: (ha-150891-m02)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/boot2docker.iso'/>
	I0731 22:41:37.749324 1194386 main.go:141] libmachine: (ha-150891-m02)       <target dev='hdc' bus='scsi'/>
	I0731 22:41:37.749335 1194386 main.go:141] libmachine: (ha-150891-m02)       <readonly/>
	I0731 22:41:37.749343 1194386 main.go:141] libmachine: (ha-150891-m02)     </disk>
	I0731 22:41:37.749357 1194386 main.go:141] libmachine: (ha-150891-m02)     <disk type='file' device='disk'>
	I0731 22:41:37.749371 1194386 main.go:141] libmachine: (ha-150891-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 22:41:37.749388 1194386 main.go:141] libmachine: (ha-150891-m02)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/ha-150891-m02.rawdisk'/>
	I0731 22:41:37.749404 1194386 main.go:141] libmachine: (ha-150891-m02)       <target dev='hda' bus='virtio'/>
	I0731 22:41:37.749415 1194386 main.go:141] libmachine: (ha-150891-m02)     </disk>
	I0731 22:41:37.749428 1194386 main.go:141] libmachine: (ha-150891-m02)     <interface type='network'>
	I0731 22:41:37.749441 1194386 main.go:141] libmachine: (ha-150891-m02)       <source network='mk-ha-150891'/>
	I0731 22:41:37.749452 1194386 main.go:141] libmachine: (ha-150891-m02)       <model type='virtio'/>
	I0731 22:41:37.749462 1194386 main.go:141] libmachine: (ha-150891-m02)     </interface>
	I0731 22:41:37.749477 1194386 main.go:141] libmachine: (ha-150891-m02)     <interface type='network'>
	I0731 22:41:37.749490 1194386 main.go:141] libmachine: (ha-150891-m02)       <source network='default'/>
	I0731 22:41:37.749511 1194386 main.go:141] libmachine: (ha-150891-m02)       <model type='virtio'/>
	I0731 22:41:37.749523 1194386 main.go:141] libmachine: (ha-150891-m02)     </interface>
	I0731 22:41:37.749534 1194386 main.go:141] libmachine: (ha-150891-m02)     <serial type='pty'>
	I0731 22:41:37.749554 1194386 main.go:141] libmachine: (ha-150891-m02)       <target port='0'/>
	I0731 22:41:37.749572 1194386 main.go:141] libmachine: (ha-150891-m02)     </serial>
	I0731 22:41:37.749585 1194386 main.go:141] libmachine: (ha-150891-m02)     <console type='pty'>
	I0731 22:41:37.749599 1194386 main.go:141] libmachine: (ha-150891-m02)       <target type='serial' port='0'/>
	I0731 22:41:37.749611 1194386 main.go:141] libmachine: (ha-150891-m02)     </console>
	I0731 22:41:37.749621 1194386 main.go:141] libmachine: (ha-150891-m02)     <rng model='virtio'>
	I0731 22:41:37.749631 1194386 main.go:141] libmachine: (ha-150891-m02)       <backend model='random'>/dev/random</backend>
	I0731 22:41:37.749637 1194386 main.go:141] libmachine: (ha-150891-m02)     </rng>
	I0731 22:41:37.749642 1194386 main.go:141] libmachine: (ha-150891-m02)     
	I0731 22:41:37.749649 1194386 main.go:141] libmachine: (ha-150891-m02)     
	I0731 22:41:37.749654 1194386 main.go:141] libmachine: (ha-150891-m02)   </devices>
	I0731 22:41:37.749664 1194386 main.go:141] libmachine: (ha-150891-m02) </domain>
	I0731 22:41:37.749674 1194386 main.go:141] libmachine: (ha-150891-m02) 
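
The domain XML dumped above is what the kvm2 driver defines in libvirt for the new node: 2 vCPUs, 2200 MiB, a cdrom holding the boot ISO, a raw disk, and two virtio NICs (the private mk-ha-150891 network plus default). As a rough, hypothetical sketch only (not the driver's actual code), rendering a comparable definition in Go with text/template could look like this; the struct fields and template are assumptions:

	// Illustrative sketch: render a minimal libvirt domain definition
	// similar to the one logged above. Not minikube's implementation.
	package main

	import (
		"os"
		"text/template"
	)

	type domain struct {
		Name     string
		MemoryMB int
		VCPUs    int
		ISO      string
		DiskPath string
		Network  string
	}

	const domainXML = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMB}}</memory>
	  <vcpu>{{.VCPUs}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
	    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.DiskPath}}'/><target dev='hda' bus='virtio'/></disk>
	    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
	  </devices>
	</domain>
	`

	func main() {
		t := template.Must(template.New("domain").Parse(domainXML))
		// Values mirror the log: 2 vCPUs, 2200 MiB, private network mk-ha-150891.
		// ISO and disk paths are placeholders, not the real store paths.
		_ = t.Execute(os.Stdout, domain{
			Name: "ha-150891-m02", MemoryMB: 2200, VCPUs: 2,
			ISO: "/path/to/boot2docker.iso", DiskPath: "/path/to/ha-150891-m02.rawdisk",
			Network: "mk-ha-150891",
		})
	}
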
	I0731 22:41:37.756306 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:f0:b6:68 in network default
	I0731 22:41:37.756846 1194386 main.go:141] libmachine: (ha-150891-m02) Ensuring networks are active...
	I0731 22:41:37.756870 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:37.757613 1194386 main.go:141] libmachine: (ha-150891-m02) Ensuring network default is active
	I0731 22:41:37.757887 1194386 main.go:141] libmachine: (ha-150891-m02) Ensuring network mk-ha-150891 is active
	I0731 22:41:37.758199 1194386 main.go:141] libmachine: (ha-150891-m02) Getting domain xml...
	I0731 22:41:37.758754 1194386 main.go:141] libmachine: (ha-150891-m02) Creating domain...
	I0731 22:41:39.003049 1194386 main.go:141] libmachine: (ha-150891-m02) Waiting to get IP...
	I0731 22:41:39.003772 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:39.004166 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:39.004243 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:39.004164 1194775 retry.go:31] will retry after 204.235682ms: waiting for machine to come up
	I0731 22:41:39.209779 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:39.210251 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:39.210275 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:39.210205 1194775 retry.go:31] will retry after 356.106914ms: waiting for machine to come up
	I0731 22:41:39.568003 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:39.568563 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:39.568595 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:39.568507 1194775 retry.go:31] will retry after 368.623567ms: waiting for machine to come up
	I0731 22:41:39.939393 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:39.939920 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:39.939948 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:39.939881 1194775 retry.go:31] will retry after 506.801083ms: waiting for machine to come up
	I0731 22:41:40.448839 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:40.449376 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:40.449407 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:40.449339 1194775 retry.go:31] will retry after 477.617493ms: waiting for machine to come up
	I0731 22:41:40.928985 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:40.929381 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:40.929405 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:40.929331 1194775 retry.go:31] will retry after 831.102078ms: waiting for machine to come up
	I0731 22:41:41.762028 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:41.762523 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:41.762547 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:41.762481 1194775 retry.go:31] will retry after 1.114057632s: waiting for machine to come up
	I0731 22:41:42.878288 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:42.878818 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:42.878873 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:42.878793 1194775 retry.go:31] will retry after 903.129066ms: waiting for machine to come up
	I0731 22:41:43.783929 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:43.784448 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:43.784485 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:43.784394 1194775 retry.go:31] will retry after 1.316496541s: waiting for machine to come up
	I0731 22:41:45.102179 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:45.102732 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:45.102762 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:45.102661 1194775 retry.go:31] will retry after 1.883859618s: waiting for machine to come up
	I0731 22:41:46.988949 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:46.989490 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:46.989518 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:46.989440 1194775 retry.go:31] will retry after 2.374845063s: waiting for machine to come up
	I0731 22:41:49.367716 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:49.368167 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:49.368198 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:49.368118 1194775 retry.go:31] will retry after 2.338221125s: waiting for machine to come up
	I0731 22:41:51.710267 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:51.710794 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:51.710831 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:51.710726 1194775 retry.go:31] will retry after 4.46190766s: waiting for machine to come up
	I0731 22:41:56.173775 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:41:56.174219 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find current IP address of domain ha-150891-m02 in network mk-ha-150891
	I0731 22:41:56.174238 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | I0731 22:41:56.174193 1194775 retry.go:31] will retry after 5.387637544s: waiting for machine to come up
	I0731 22:42:01.566356 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.566905 1194386 main.go:141] libmachine: (ha-150891-m02) Found IP for machine: 192.168.39.224
	I0731 22:42:01.566930 1194386 main.go:141] libmachine: (ha-150891-m02) Reserving static IP address...
	I0731 22:42:01.566944 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has current primary IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.567290 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | unable to find host DHCP lease matching {name: "ha-150891-m02", mac: "52:54:00:60:a1:dd", ip: "192.168.39.224"} in network mk-ha-150891
	I0731 22:42:01.650767 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Getting to WaitForSSH function...
	I0731 22:42:01.650796 1194386 main.go:141] libmachine: (ha-150891-m02) Reserved static IP address: 192.168.39.224
	I0731 22:42:01.650808 1194386 main.go:141] libmachine: (ha-150891-m02) Waiting for SSH to be available...
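
The retry.go:31 entries above show the driver polling libvirt for the new machine's DHCP lease, backing off a little longer on each attempt until the IP 192.168.39.224 appeared. A minimal, hypothetical Go sketch of that wait-with-backoff pattern follows; lookupIP and the exact interval growth are assumptions, not minikube's retry helper:

	// Illustrative sketch of a wait-for-IP loop with growing, jittered backoff.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for the libvirt DHCP-lease query; it is hypothetical.
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			// Add jitter and grow the base interval, roughly like the log's progression.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay *= 2
		}
		return "", errors.New("timed out waiting for IP")
	}

	func main() {
		if ip, err := waitForIP(5 * time.Second); err == nil {
			fmt.Println("Found IP for machine:", ip)
		}
	}
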
	I0731 22:42:01.653594 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.654012 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:minikube Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:01.654034 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.654205 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Using SSH client type: external
	I0731 22:42:01.654226 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa (-rw-------)
	I0731 22:42:01.654256 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 22:42:01.654273 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | About to run SSH command:
	I0731 22:42:01.654286 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | exit 0
	I0731 22:42:01.780157 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | SSH cmd err, output: <nil>: 
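
The `exit 0` probe above is how the driver decides the guest's sshd is reachable: it shells out to the system ssh client with host-key checking disabled and keeps trying until the command exits cleanly. A hedged Go sketch of the same kind of check (the user, address and key path are taken from the log; the helper itself is made up):

	// Illustrative sketch: probe SSH readiness by running `exit 0` via the
	// system ssh client until it succeeds. Not the libmachine implementation.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshReady(user, addr, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, addr),
			"exit 0")
		return cmd.Run() == nil
	}

	func main() {
		// Values mirror the log entry; the key path is a placeholder.
		for !sshReady("docker", "192.168.39.224", "/path/to/id_rsa") {
			time.Sleep(2 * time.Second)
		}
		fmt.Println("SSH is available")
	}
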
	I0731 22:42:01.780391 1194386 main.go:141] libmachine: (ha-150891-m02) KVM machine creation complete!
	I0731 22:42:01.780772 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetConfigRaw
	I0731 22:42:01.781317 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:01.781493 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:01.781666 1194386 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 22:42:01.781681 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:42:01.782914 1194386 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 22:42:01.782931 1194386 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 22:42:01.782937 1194386 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 22:42:01.782944 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:01.785402 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.785794 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:01.785833 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.785942 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:01.786151 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:01.786335 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:01.786479 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:01.786656 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:01.786873 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:01.786885 1194386 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 22:42:01.891508 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:42:01.891533 1194386 main.go:141] libmachine: Detecting the provisioner...
	I0731 22:42:01.891542 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:01.894486 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.894898 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:01.894927 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:01.895131 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:01.895399 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:01.895609 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:01.895789 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:01.895975 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:01.896175 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:01.896190 1194386 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 22:42:02.000665 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 22:42:02.000750 1194386 main.go:141] libmachine: found compatible host: buildroot
	I0731 22:42:02.000757 1194386 main.go:141] libmachine: Provisioning with buildroot...
	I0731 22:42:02.000765 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetMachineName
	I0731 22:42:02.001051 1194386 buildroot.go:166] provisioning hostname "ha-150891-m02"
	I0731 22:42:02.001086 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetMachineName
	I0731 22:42:02.001291 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.003876 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.004193 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.004219 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.004367 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.004564 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.004735 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.004851 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.005012 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:02.005247 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:02.005266 1194386 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-150891-m02 && echo "ha-150891-m02" | sudo tee /etc/hostname
	I0731 22:42:02.121702 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150891-m02
	
	I0731 22:42:02.121733 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.124572 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.124994 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.125025 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.125222 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.125470 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.125671 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.125852 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.126053 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:02.126266 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:02.126284 1194386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-150891-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-150891-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-150891-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:42:02.236267 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
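
The shell run just above makes the new hostname resolve locally by rewriting (or appending) the 127.0.1.1 entry in /etc/hosts. Purely as an illustration, the same snippet could be templated in Go for an arbitrary hostname; hostsFixupCmd is a made-up helper, not minikube's provisioner:

	// Illustrative sketch: build the /etc/hosts fix-up shell snippet for a hostname.
	package main

	import "fmt"

	func hostsFixupCmd(hostname string) string {
		return fmt.Sprintf(`
			if ! grep -xq '.*\s%[1]s' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
				else
					echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
				fi
			fi`, hostname)
	}

	func main() {
		fmt.Println(hostsFixupCmd("ha-150891-m02"))
	}
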
	I0731 22:42:02.236298 1194386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 22:42:02.236316 1194386 buildroot.go:174] setting up certificates
	I0731 22:42:02.236328 1194386 provision.go:84] configureAuth start
	I0731 22:42:02.236337 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetMachineName
	I0731 22:42:02.236654 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:42:02.239306 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.239684 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.239717 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.239851 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.242139 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.242501 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.242526 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.242723 1194386 provision.go:143] copyHostCerts
	I0731 22:42:02.242769 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:42:02.242812 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 22:42:02.242824 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:42:02.242908 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 22:42:02.243007 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:42:02.243033 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 22:42:02.243043 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:42:02.243087 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 22:42:02.243150 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:42:02.243175 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 22:42:02.243184 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:42:02.243220 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 22:42:02.243309 1194386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.ha-150891-m02 san=[127.0.0.1 192.168.39.224 ha-150891-m02 localhost minikube]
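
provision.go above reports issuing a server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost and minikube, signed by the local CA key. A self-contained, hypothetical Go sketch of that style of issuance with crypto/x509 follows; the key size, validity window and dropped error handling are simplifications, not minikube's actual settings:

	// Illustrative sketch: create a throwaway CA, then issue a server cert
	// with the SANs seen in the log. Errors are ignored for brevity.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikubeCA"},
			NotBefore:    time.Now(), NotAfter: time.Now().Add(24 * time.Hour),
			IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-150891-m02"}},
			NotBefore:    time.Now(), NotAfter: time.Now().Add(24 * time.Hour),
			DNSNames:     []string{"ha-150891-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.224")},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
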
	I0731 22:42:02.346530 1194386 provision.go:177] copyRemoteCerts
	I0731 22:42:02.346589 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:42:02.346616 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.349524 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.349838 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.349867 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.350116 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.350374 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.350565 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.350712 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	I0731 22:42:02.431711 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 22:42:02.431817 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 22:42:02.455084 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 22:42:02.455172 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 22:42:02.478135 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 22:42:02.478228 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 22:42:02.501270 1194386 provision.go:87] duration metric: took 264.925805ms to configureAuth
	I0731 22:42:02.501302 1194386 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:42:02.501475 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:42:02.501561 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.504052 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.504390 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.504418 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.504570 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.504764 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.504908 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.505044 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.505280 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:02.505451 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:02.505476 1194386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 22:42:02.765035 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 22:42:02.765067 1194386 main.go:141] libmachine: Checking connection to Docker...
	I0731 22:42:02.765078 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetURL
	I0731 22:42:02.766389 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | Using libvirt version 6000000
	I0731 22:42:02.768395 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.768756 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.768784 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.768967 1194386 main.go:141] libmachine: Docker is up and running!
	I0731 22:42:02.768982 1194386 main.go:141] libmachine: Reticulating splines...
	I0731 22:42:02.768989 1194386 client.go:171] duration metric: took 25.485560762s to LocalClient.Create
	I0731 22:42:02.769012 1194386 start.go:167] duration metric: took 25.485625209s to libmachine.API.Create "ha-150891"
	I0731 22:42:02.769022 1194386 start.go:293] postStartSetup for "ha-150891-m02" (driver="kvm2")
	I0731 22:42:02.769032 1194386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:42:02.769051 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:02.769330 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:42:02.769363 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.771534 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.771903 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.771935 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.772118 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.772330 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.772507 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.772679 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	I0731 22:42:02.854792 1194386 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:42:02.859040 1194386 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:42:02.859077 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 22:42:02.859163 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 22:42:02.859262 1194386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 22:42:02.859275 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /etc/ssl/certs/11794002.pem
	I0731 22:42:02.859388 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:42:02.869291 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:42:02.892937 1194386 start.go:296] duration metric: took 123.899794ms for postStartSetup
	I0731 22:42:02.892999 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetConfigRaw
	I0731 22:42:02.893710 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:42:02.896566 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.896951 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.896986 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.897226 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:42:02.897445 1194386 start.go:128] duration metric: took 25.633530271s to createHost
	I0731 22:42:02.897475 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:02.899680 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.900057 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:02.900108 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:02.900233 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:02.900428 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.900631 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:02.900779 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:02.900994 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:42:02.901162 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I0731 22:42:02.901172 1194386 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 22:42:03.004868 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465722.982647454
	
	I0731 22:42:03.004901 1194386 fix.go:216] guest clock: 1722465722.982647454
	I0731 22:42:03.004910 1194386 fix.go:229] Guest: 2024-07-31 22:42:02.982647454 +0000 UTC Remote: 2024-07-31 22:42:02.897460391 +0000 UTC m=+82.432245142 (delta=85.187063ms)
	I0731 22:42:03.004929 1194386 fix.go:200] guest clock delta is within tolerance: 85.187063ms
	I0731 22:42:03.004934 1194386 start.go:83] releasing machines lock for "ha-150891-m02", held for 25.741133334s
	I0731 22:42:03.004955 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:03.005260 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:42:03.008030 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.008361 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:03.008391 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.011002 1194386 out.go:177] * Found network options:
	I0731 22:42:03.012400 1194386 out.go:177]   - NO_PROXY=192.168.39.105
	W0731 22:42:03.013513 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:42:03.013571 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:03.014240 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:03.014443 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:42:03.014555 1194386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 22:42:03.014611 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	W0731 22:42:03.014714 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:42:03.014790 1194386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 22:42:03.014814 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:42:03.017516 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.017542 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.017869 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:03.017897 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.017922 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:03.017935 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:03.018043 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:03.018143 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:42:03.018279 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:03.018358 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:42:03.018435 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:03.018520 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:42:03.018589 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	I0731 22:42:03.018643 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	I0731 22:42:03.246365 1194386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 22:42:03.252053 1194386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:42:03.252152 1194386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:42:03.268896 1194386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 22:42:03.268929 1194386 start.go:495] detecting cgroup driver to use...
	I0731 22:42:03.269022 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:42:03.284943 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:42:03.299484 1194386 docker.go:217] disabling cri-docker service (if available) ...
	I0731 22:42:03.299546 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 22:42:03.313401 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 22:42:03.327404 1194386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 22:42:03.447515 1194386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 22:42:03.594212 1194386 docker.go:233] disabling docker service ...
	I0731 22:42:03.594293 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 22:42:03.608736 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 22:42:03.621935 1194386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 22:42:03.755744 1194386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 22:42:03.864008 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 22:42:03.876911 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:42:03.894800 1194386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 22:42:03.894864 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.905401 1194386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 22:42:03.905490 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.915927 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.926411 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.936885 1194386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:42:03.947334 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.957785 1194386 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:42:03.974821 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
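For reference, the run of sed edits above (22:42:03.894 through 22:42:03.974) leaves the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is an illustrative reconstruction from the commands in this log, not content captured from the VM, and the TOML section headers reflect the usual CRI-O layout rather than anything the log itself shows:

    # sketch of /etc/crio/crio.conf.d/02-crio.conf after minikube's sed edits (illustrative)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]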
	I0731 22:42:03.984854 1194386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:42:03.994141 1194386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 22:42:03.994210 1194386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 22:42:04.009379 1194386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:42:04.019163 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:42:04.135711 1194386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 22:42:04.270607 1194386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 22:42:04.270689 1194386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 22:42:04.276136 1194386 start.go:563] Will wait 60s for crictl version
	I0731 22:42:04.276200 1194386 ssh_runner.go:195] Run: which crictl
	I0731 22:42:04.279737 1194386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:42:04.320910 1194386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 22:42:04.321025 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:42:04.349689 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:42:04.381472 1194386 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 22:42:04.382837 1194386 out.go:177]   - env NO_PROXY=192.168.39.105
	I0731 22:42:04.384018 1194386 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:42:04.386994 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:04.387410 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:41:51 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:42:04.387440 1194386 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:42:04.387682 1194386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 22:42:04.391813 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:42:04.406019 1194386 mustload.go:65] Loading cluster: ha-150891
	I0731 22:42:04.406249 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:42:04.406532 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:42:04.406568 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:42:04.422418 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0731 22:42:04.422891 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:42:04.423334 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:42:04.423357 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:42:04.423682 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:42:04.423895 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:42:04.425517 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:42:04.425820 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:42:04.425849 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:42:04.442819 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0731 22:42:04.443314 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:42:04.443827 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:42:04.443857 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:42:04.444275 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:42:04.444530 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:42:04.444699 1194386 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891 for IP: 192.168.39.224
	I0731 22:42:04.444713 1194386 certs.go:194] generating shared ca certs ...
	I0731 22:42:04.444735 1194386 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:42:04.444901 1194386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 22:42:04.444953 1194386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 22:42:04.444966 1194386 certs.go:256] generating profile certs ...
	I0731 22:42:04.445066 1194386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key
	I0731 22:42:04.445100 1194386 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.f8271574
	I0731 22:42:04.445120 1194386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.f8271574 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.105 192.168.39.224 192.168.39.254]
	I0731 22:42:04.566994 1194386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.f8271574 ...
	I0731 22:42:04.567034 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.f8271574: {Name:mk440b38c075a0d1eded7b1aea3015c7a2eb447d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:42:04.567215 1194386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.f8271574 ...
	I0731 22:42:04.567230 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.f8271574: {Name:mk538452a64b13906f2016b6f80157ab13990994 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:42:04.567331 1194386 certs.go:381] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.f8271574 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt
	I0731 22:42:04.567522 1194386 certs.go:385] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.f8271574 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key
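The apiserver serving certificate written above is signed by the profile CA and carries every address a client might use: the in-cluster service IP 10.96.0.1, localhost, 10.0.0.1, both control-plane node IPs and the HA VIP 192.168.39.254. minikube builds it in Go (crypto.go); a hand-rolled openssl sketch of the same idea, assuming the profile's ca.crt/ca.key and an existing apiserver.key, and with an illustrative CN and validity, would be roughly:

    # illustrative only: minikube generates this cert in Go, not with openssl
    openssl req -new -key apiserver.key -subj "/CN=kube-apiserver" -out apiserver.csr
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.39.105,IP:192.168.39.224,IP:192.168.39.254') \
      -out apiserver.crt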
	I0731 22:42:04.567731 1194386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key
	I0731 22:42:04.567755 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:42:04.567773 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:42:04.567791 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:42:04.567809 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:42:04.567825 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:42:04.567839 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:42:04.567852 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:42:04.567870 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:42:04.567934 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 22:42:04.567983 1194386 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 22:42:04.568005 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 22:42:04.568039 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 22:42:04.568076 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 22:42:04.568130 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 22:42:04.568195 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:42:04.568244 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:42:04.568267 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem -> /usr/share/ca-certificates/1179400.pem
	I0731 22:42:04.568285 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /usr/share/ca-certificates/11794002.pem
	I0731 22:42:04.568333 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:42:04.571657 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:42:04.572134 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:42:04.572166 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:42:04.572399 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:42:04.572650 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:42:04.572864 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:42:04.573040 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:42:04.648556 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 22:42:04.653487 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 22:42:04.664738 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 22:42:04.668859 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 22:42:04.679711 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 22:42:04.684221 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 22:42:04.694519 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 22:42:04.698803 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0731 22:42:04.709664 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 22:42:04.713661 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 22:42:04.723978 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 22:42:04.727811 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 22:42:04.738933 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:42:04.763882 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:42:04.788528 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:42:04.811934 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 22:42:04.834980 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 22:42:04.857764 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 22:42:04.880689 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:42:04.903429 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:42:04.926716 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:42:04.949405 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 22:42:04.972434 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 22:42:04.995700 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 22:42:05.013939 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 22:42:05.031874 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 22:42:05.048327 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0731 22:42:05.066409 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 22:42:05.084380 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 22:42:05.101397 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 22:42:05.117920 1194386 ssh_runner.go:195] Run: openssl version
	I0731 22:42:05.123328 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 22:42:05.134172 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 22:42:05.138516 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 22:42:05.138597 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 22:42:05.144305 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
	I0731 22:42:05.155324 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 22:42:05.166221 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 22:42:05.170475 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 22:42:05.170539 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 22:42:05.176184 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 22:42:05.187088 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:42:05.198203 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:42:05.202718 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:42:05.202780 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:42:05.208308 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
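The openssl x509 -hash -noout calls above are what produce the link names used in the test -L checks: OpenSSL looks CA certificates up by subject hash, so each PEM placed under /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs (b5213941.0 for minikubeCA.pem here). Done by hand, the pattern is roughly:

    # illustrative: create the subject-hash symlink OpenSSL uses to find a CA cert
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"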
	I0731 22:42:05.226938 1194386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:42:05.231471 1194386 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:42:05.231536 1194386 kubeadm.go:934] updating node {m02 192.168.39.224 8443 v1.30.3 crio true true} ...
	I0731 22:42:05.231698 1194386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-150891-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:42:05.231740 1194386 kube-vip.go:115] generating kube-vip config ...
	I0731 22:42:05.231795 1194386 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:42:05.246943 1194386 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:42:05.247027 1194386 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0731 22:42:05.247083 1194386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:42:05.256652 1194386 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 22:42:05.256734 1194386 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 22:42:05.266557 1194386 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 22:42:05.266586 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:42:05.266584 1194386 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0731 22:42:05.266662 1194386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:42:05.266589 1194386 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0731 22:42:05.270908 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 22:42:05.270950 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 22:42:08.567796 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:42:08.567914 1194386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:42:08.572759 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 22:42:08.572795 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 22:42:09.773119 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:42:09.787520 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:42:09.787632 1194386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:42:09.792189 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 22:42:09.792247 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
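Since /var/lib/minikube/binaries/v1.30.3 did not exist on the new node, kubectl, kubeadm and kubelet are fetched from dl.k8s.io with their published .sha256 checksums and copied over SSH. Pulling one of them by hand, using the same URLs as this log, would look roughly like:

    # illustrative: fetch kubelet v1.30.3 and verify it against the published checksum
    curl -fLO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
    echo "$(curl -fsSL https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum -c -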
	I0731 22:42:10.200907 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 22:42:10.210379 1194386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 22:42:10.227553 1194386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:42:10.244445 1194386 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 22:42:10.260996 1194386 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:42:10.265160 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:42:10.277149 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:42:10.390340 1194386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:42:10.406449 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:42:10.406959 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:42:10.407022 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:42:10.422762 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42717
	I0731 22:42:10.423264 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:42:10.423777 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:42:10.423801 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:42:10.424216 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:42:10.424463 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:42:10.424666 1194386 start.go:317] joinCluster: &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:42:10.424772 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 22:42:10.424789 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:42:10.427571 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:42:10.428041 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:42:10.428078 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:42:10.428248 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:42:10.428481 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:42:10.428657 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:42:10.428809 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:42:10.578319 1194386 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:42:10.578380 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x5cnck.ovzvspqpct86akxh --discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-150891-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443"
	I0731 22:42:32.726020 1194386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x5cnck.ovzvspqpct86akxh --discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-150891-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443": (22.147610246s)
	I0731 22:42:32.726067 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 22:42:33.258410 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-150891-m02 minikube.k8s.io/updated_at=2024_07_31T22_42_33_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-150891 minikube.k8s.io/primary=false
	I0731 22:42:33.413377 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-150891-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 22:42:33.521866 1194386 start.go:319] duration metric: took 23.097192701s to joinCluster
	I0731 22:42:33.521958 1194386 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:42:33.522369 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:42:33.523496 1194386 out.go:177] * Verifying Kubernetes components...
	I0731 22:42:33.524799 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:42:33.754309 1194386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:42:33.774742 1194386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:42:33.775039 1194386 kapi.go:59] client config for ha-150891: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d035c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 22:42:33.775125 1194386 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.105:8443
	I0731 22:42:33.775384 1194386 node_ready.go:35] waiting up to 6m0s for node "ha-150891-m02" to be "Ready" ...
	I0731 22:42:33.775519 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:33.775532 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:33.775544 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:33.775552 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:33.796760 1194386 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0731 22:42:34.275675 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:34.275712 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:34.275724 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:34.275729 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:34.280296 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:42:34.776268 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:34.776296 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:34.776305 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:34.776308 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:34.779673 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:35.276623 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:35.276655 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:35.276663 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:35.276666 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:35.280144 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:35.776532 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:35.776560 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:35.776573 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:35.776578 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:35.779865 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:35.780741 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:36.276038 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:36.276065 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:36.276074 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:36.276080 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:36.279619 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:36.776522 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:36.776555 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:36.776566 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:36.776572 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:36.780073 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:37.275933 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:37.275963 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:37.275971 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:37.275976 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:37.279387 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:37.775927 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:37.775951 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:37.775962 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:37.775968 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:37.779256 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:38.276341 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:38.276366 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:38.276375 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:38.276380 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:38.279916 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:38.280578 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:38.776501 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:38.776526 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:38.776535 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:38.776539 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:38.779625 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:39.276621 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:39.276647 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:39.276658 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:39.276663 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:39.280311 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:39.776612 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:39.776636 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:39.776644 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:39.776648 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:39.779992 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:40.276042 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:40.276071 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:40.276079 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:40.276083 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:40.279206 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:40.775762 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:40.775791 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:40.775799 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:40.775804 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:40.778868 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:40.779411 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:41.275678 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:41.275710 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:41.275723 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:41.275729 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:41.279076 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:41.775913 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:41.775942 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:41.775954 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:41.775961 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:41.779277 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:42.276066 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:42.276113 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:42.276124 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:42.276130 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:42.279694 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:42.776191 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:42.776225 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:42.776236 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:42.776240 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:42.779748 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:42.780220 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:43.276407 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:43.276436 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:43.276449 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:43.276454 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:43.280052 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:43.776052 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:43.776078 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:43.776096 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:43.776101 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:43.779136 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:44.276115 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:44.276144 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:44.276153 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:44.276158 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:44.279340 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:44.776148 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:44.776174 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:44.776183 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:44.776189 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:44.779238 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:45.276307 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:45.276335 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:45.276343 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:45.276347 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:45.279538 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:45.280200 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:45.775892 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:45.775920 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:45.775928 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:45.775931 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:45.778889 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:46.275874 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:46.275901 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:46.275909 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:46.275912 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:46.279335 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:46.775583 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:46.775610 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:46.775619 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:46.775623 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:46.778852 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:47.276650 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:47.276675 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:47.276690 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:47.276694 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:47.281842 1194386 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:42:47.282342 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:47.776361 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:47.776390 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:47.776401 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:47.776405 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:47.779395 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:48.276454 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:48.276492 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:48.276506 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:48.276515 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:48.279677 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:48.776600 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:48.776631 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:48.776640 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:48.776644 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:48.780123 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:49.275930 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:49.275955 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:49.275964 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:49.275968 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:49.279328 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:49.775711 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:49.775743 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:49.775753 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:49.775758 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:49.778833 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:49.779323 1194386 node_ready.go:53] node "ha-150891-m02" has status "Ready":"False"
	I0731 22:42:50.276467 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:50.276496 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.276505 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.276510 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.281180 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:42:50.281626 1194386 node_ready.go:49] node "ha-150891-m02" has status "Ready":"True"
	I0731 22:42:50.281647 1194386 node_ready.go:38] duration metric: took 16.506246165s for node "ha-150891-m02" to be "Ready" ...
	I0731 22:42:50.281657 1194386 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:42:50.281758 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:42:50.281768 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.281776 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.281779 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.288346 1194386 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:42:50.295951 1194386 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.296054 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4928n
	I0731 22:42:50.296063 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.296071 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.296075 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.299556 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.300377 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:50.300397 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.300406 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.300413 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.304009 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.304562 1194386 pod_ready.go:92] pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:50.304585 1194386 pod_ready.go:81] duration metric: took 8.598705ms for pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.304599 1194386 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.304676 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rkd4j
	I0731 22:42:50.304687 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.304698 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.304704 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.308419 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.309357 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:50.309373 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.309380 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.309387 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.313309 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.313850 1194386 pod_ready.go:92] pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:50.313869 1194386 pod_ready.go:81] duration metric: took 9.262271ms for pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.313879 1194386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.313942 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891
	I0731 22:42:50.313949 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.313956 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.313965 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.317276 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.317844 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:50.317859 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.317867 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.317871 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.321050 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.321564 1194386 pod_ready.go:92] pod "etcd-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:50.321590 1194386 pod_ready.go:81] duration metric: took 7.70537ms for pod "etcd-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.321601 1194386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.321664 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891-m02
	I0731 22:42:50.321671 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.321679 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.321687 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.324603 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:50.325225 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:50.325239 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.325246 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.325255 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.328213 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:50.821988 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891-m02
	I0731 22:42:50.822013 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.822020 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.822024 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.825560 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:50.826161 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:50.826177 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.826186 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.826190 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.829105 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:50.829539 1194386 pod_ready.go:92] pod "etcd-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:50.829558 1194386 pod_ready.go:81] duration metric: took 507.948191ms for pod "etcd-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.829580 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:50.876995 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891
	I0731 22:42:50.877023 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:50.877035 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:50.877042 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:50.880746 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.076681 1194386 request.go:629] Waited for 195.317866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:51.076815 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:51.076843 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:51.076854 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:51.076859 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:51.080647 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.081184 1194386 pod_ready.go:92] pod "kube-apiserver-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:51.081207 1194386 pod_ready.go:81] duration metric: took 251.615168ms for pod "kube-apiserver-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:51.081218 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:51.276660 1194386 request.go:629] Waited for 195.356743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m02
	I0731 22:42:51.276726 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m02
	I0731 22:42:51.276733 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:51.276742 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:51.276750 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:51.280464 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.477266 1194386 request.go:629] Waited for 196.12777ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:51.477356 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:51.477361 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:51.477369 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:51.477376 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:51.480688 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.481224 1194386 pod_ready.go:92] pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:51.481246 1194386 pod_ready.go:81] duration metric: took 400.020752ms for pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:51.481262 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:51.677268 1194386 request.go:629] Waited for 195.916954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891
	I0731 22:42:51.677346 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891
	I0731 22:42:51.677354 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:51.677367 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:51.677378 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:51.680623 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.876558 1194386 request.go:629] Waited for 195.306596ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:51.876630 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:51.876636 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:51.876644 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:51.876648 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:51.879814 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:51.880342 1194386 pod_ready.go:92] pod "kube-controller-manager-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:51.880364 1194386 pod_ready.go:81] duration metric: took 399.094253ms for pod "kube-controller-manager-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:51.880374 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:52.077442 1194386 request.go:629] Waited for 196.991894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m02
	I0731 22:42:52.077546 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m02
	I0731 22:42:52.077557 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:52.077566 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:52.077571 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:52.081112 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:52.277324 1194386 request.go:629] Waited for 195.412254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:52.277400 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:52.277405 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:52.277413 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:52.277421 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:52.280633 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:52.281150 1194386 pod_ready.go:92] pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:52.281172 1194386 pod_ready.go:81] duration metric: took 400.792125ms for pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:52.281186 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9xcss" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:52.476601 1194386 request.go:629] Waited for 195.339584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xcss
	I0731 22:42:52.476671 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xcss
	I0731 22:42:52.476676 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:52.476684 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:52.476688 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:52.480373 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:52.676485 1194386 request.go:629] Waited for 195.265215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:52.676577 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:52.676583 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:52.676592 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:52.676598 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:52.679895 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:52.680470 1194386 pod_ready.go:92] pod "kube-proxy-9xcss" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:52.680498 1194386 pod_ready.go:81] duration metric: took 399.303657ms for pod "kube-proxy-9xcss" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:52.680509 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmkp9" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:52.877545 1194386 request.go:629] Waited for 196.954806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nmkp9
	I0731 22:42:52.877638 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nmkp9
	I0731 22:42:52.877644 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:52.877652 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:52.877658 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:52.880856 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:53.076949 1194386 request.go:629] Waited for 195.422276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:53.077046 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:53.077051 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.077060 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.077069 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.080155 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:53.080589 1194386 pod_ready.go:92] pod "kube-proxy-nmkp9" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:53.080608 1194386 pod_ready.go:81] duration metric: took 400.092371ms for pod "kube-proxy-nmkp9" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:53.080618 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:53.276840 1194386 request.go:629] Waited for 196.118028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891
	I0731 22:42:53.276913 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891
	I0731 22:42:53.276918 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.276927 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.276932 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.280453 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:53.477535 1194386 request.go:629] Waited for 196.281182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:53.477639 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:42:53.477652 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.477663 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.477672 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.480684 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:42:53.481253 1194386 pod_ready.go:92] pod "kube-scheduler-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:53.481282 1194386 pod_ready.go:81] duration metric: took 400.655466ms for pod "kube-scheduler-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:53.481297 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:53.677319 1194386 request.go:629] Waited for 195.9186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m02
	I0731 22:42:53.677387 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m02
	I0731 22:42:53.677393 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.677401 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.677408 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.680839 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:53.876870 1194386 request.go:629] Waited for 195.375145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:53.876947 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:42:53.876952 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.876961 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.876965 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.880151 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:53.880910 1194386 pod_ready.go:92] pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:42:53.880938 1194386 pod_ready.go:81] duration metric: took 399.629245ms for pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:42:53.880953 1194386 pod_ready.go:38] duration metric: took 3.599257708s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:42:53.880977 1194386 api_server.go:52] waiting for apiserver process to appear ...
	I0731 22:42:53.881057 1194386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:42:53.895803 1194386 api_server.go:72] duration metric: took 20.373791047s to wait for apiserver process to appear ...
	I0731 22:42:53.895843 1194386 api_server.go:88] waiting for apiserver healthz status ...
	I0731 22:42:53.895873 1194386 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0731 22:42:53.903218 1194386 api_server.go:279] https://192.168.39.105:8443/healthz returned 200:
	ok
	I0731 22:42:53.903305 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/version
	I0731 22:42:53.903314 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:53.903322 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:53.903330 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:53.904681 1194386 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0731 22:42:53.904825 1194386 api_server.go:141] control plane version: v1.30.3
	I0731 22:42:53.904851 1194386 api_server.go:131] duration metric: took 8.998033ms to wait for apiserver health ...
	I0731 22:42:53.904863 1194386 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 22:42:54.077394 1194386 request.go:629] Waited for 172.399936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:42:54.077460 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:42:54.077465 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:54.077480 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:54.077485 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:54.082964 1194386 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0731 22:42:54.089092 1194386 system_pods.go:59] 17 kube-system pods found
	I0731 22:42:54.089130 1194386 system_pods.go:61] "coredns-7db6d8ff4d-4928n" [258080d9-48d4-4214-a8c2-ccdd229a3a4f] Running
	I0731 22:42:54.089136 1194386 system_pods.go:61] "coredns-7db6d8ff4d-rkd4j" [b40942b0-bff9-4a49-88a3-d188d5b7dcbe] Running
	I0731 22:42:54.089140 1194386 system_pods.go:61] "etcd-ha-150891" [3f5f2e82-256b-406e-b58b-51255d338219] Running
	I0731 22:42:54.089143 1194386 system_pods.go:61] "etcd-ha-150891-m02" [d20ff7ae-a18e-476a-9f38-bf9d2eea9e32] Running
	I0731 22:42:54.089146 1194386 system_pods.go:61] "kindnet-4qn8c" [4143fb96-5f2a-4107-807d-29ffbf5a95b8] Running
	I0731 22:42:54.089149 1194386 system_pods.go:61] "kindnet-bz2j7" [160def8b-f6ae-4664-8489-422121dd5a94] Running
	I0731 22:42:54.089152 1194386 system_pods.go:61] "kube-apiserver-ha-150891" [4b8aded2-d6a3-4493-ae6e-a345a4c1c872] Running
	I0731 22:42:54.089154 1194386 system_pods.go:61] "kube-apiserver-ha-150891-m02" [667b2e17-ae07-44a9-91ba-486fbacc93ae] Running
	I0731 22:42:54.089157 1194386 system_pods.go:61] "kube-controller-manager-ha-150891" [d3e86e76-fbc2-4732-acfc-8462570c27e4] Running
	I0731 22:42:54.089160 1194386 system_pods.go:61] "kube-controller-manager-ha-150891-m02" [952d0923-4ad6-4411-ae52-5bdfc69af65c] Running
	I0731 22:42:54.089163 1194386 system_pods.go:61] "kube-proxy-9xcss" [287c0a26-1f93-4579-a5db-29b604571422] Running
	I0731 22:42:54.089166 1194386 system_pods.go:61] "kube-proxy-nmkp9" [9253676c-a473-471b-b82e-c5e7fce39774] Running
	I0731 22:42:54.089169 1194386 system_pods.go:61] "kube-scheduler-ha-150891" [bc944154-4cb3-402d-9623-987c3acecd4c] Running
	I0731 22:42:54.089171 1194386 system_pods.go:61] "kube-scheduler-ha-150891-m02" [5e2a6e0a-df70-4e80-8f94-4a6ad47dffd9] Running
	I0731 22:42:54.089174 1194386 system_pods.go:61] "kube-vip-ha-150891" [1b703a99-faf3-4c2d-a871-0fb6bce0b917] Running
	I0731 22:42:54.089177 1194386 system_pods.go:61] "kube-vip-ha-150891-m02" [dc66b927-6e80-477f-9825-8385a3df1a03] Running
	I0731 22:42:54.089180 1194386 system_pods.go:61] "storage-provisioner" [c482636f-76e6-4ea7-9a14-3e9d6a7a4308] Running
	I0731 22:42:54.089187 1194386 system_pods.go:74] duration metric: took 184.313443ms to wait for pod list to return data ...
	I0731 22:42:54.089198 1194386 default_sa.go:34] waiting for default service account to be created ...
	I0731 22:42:54.276626 1194386 request.go:629] Waited for 187.306183ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:42:54.276715 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:42:54.276727 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:54.276736 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:54.276744 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:54.279860 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:54.280186 1194386 default_sa.go:45] found service account: "default"
	I0731 22:42:54.280209 1194386 default_sa.go:55] duration metric: took 191.004768ms for default service account to be created ...
	I0731 22:42:54.280218 1194386 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 22:42:54.477457 1194386 request.go:629] Waited for 197.165061ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:42:54.477540 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:42:54.477547 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:54.477558 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:54.477567 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:54.482433 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:42:54.486954 1194386 system_pods.go:86] 17 kube-system pods found
	I0731 22:42:54.486987 1194386 system_pods.go:89] "coredns-7db6d8ff4d-4928n" [258080d9-48d4-4214-a8c2-ccdd229a3a4f] Running
	I0731 22:42:54.486992 1194386 system_pods.go:89] "coredns-7db6d8ff4d-rkd4j" [b40942b0-bff9-4a49-88a3-d188d5b7dcbe] Running
	I0731 22:42:54.486997 1194386 system_pods.go:89] "etcd-ha-150891" [3f5f2e82-256b-406e-b58b-51255d338219] Running
	I0731 22:42:54.487001 1194386 system_pods.go:89] "etcd-ha-150891-m02" [d20ff7ae-a18e-476a-9f38-bf9d2eea9e32] Running
	I0731 22:42:54.487005 1194386 system_pods.go:89] "kindnet-4qn8c" [4143fb96-5f2a-4107-807d-29ffbf5a95b8] Running
	I0731 22:42:54.487009 1194386 system_pods.go:89] "kindnet-bz2j7" [160def8b-f6ae-4664-8489-422121dd5a94] Running
	I0731 22:42:54.487013 1194386 system_pods.go:89] "kube-apiserver-ha-150891" [4b8aded2-d6a3-4493-ae6e-a345a4c1c872] Running
	I0731 22:42:54.487017 1194386 system_pods.go:89] "kube-apiserver-ha-150891-m02" [667b2e17-ae07-44a9-91ba-486fbacc93ae] Running
	I0731 22:42:54.487021 1194386 system_pods.go:89] "kube-controller-manager-ha-150891" [d3e86e76-fbc2-4732-acfc-8462570c27e4] Running
	I0731 22:42:54.487025 1194386 system_pods.go:89] "kube-controller-manager-ha-150891-m02" [952d0923-4ad6-4411-ae52-5bdfc69af65c] Running
	I0731 22:42:54.487030 1194386 system_pods.go:89] "kube-proxy-9xcss" [287c0a26-1f93-4579-a5db-29b604571422] Running
	I0731 22:42:54.487033 1194386 system_pods.go:89] "kube-proxy-nmkp9" [9253676c-a473-471b-b82e-c5e7fce39774] Running
	I0731 22:42:54.487039 1194386 system_pods.go:89] "kube-scheduler-ha-150891" [bc944154-4cb3-402d-9623-987c3acecd4c] Running
	I0731 22:42:54.487045 1194386 system_pods.go:89] "kube-scheduler-ha-150891-m02" [5e2a6e0a-df70-4e80-8f94-4a6ad47dffd9] Running
	I0731 22:42:54.487049 1194386 system_pods.go:89] "kube-vip-ha-150891" [1b703a99-faf3-4c2d-a871-0fb6bce0b917] Running
	I0731 22:42:54.487052 1194386 system_pods.go:89] "kube-vip-ha-150891-m02" [dc66b927-6e80-477f-9825-8385a3df1a03] Running
	I0731 22:42:54.487056 1194386 system_pods.go:89] "storage-provisioner" [c482636f-76e6-4ea7-9a14-3e9d6a7a4308] Running
	I0731 22:42:54.487063 1194386 system_pods.go:126] duration metric: took 206.839613ms to wait for k8s-apps to be running ...
	I0731 22:42:54.487073 1194386 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 22:42:54.487118 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:42:54.504629 1194386 system_svc.go:56] duration metric: took 17.54447ms WaitForService to wait for kubelet
	I0731 22:42:54.504662 1194386 kubeadm.go:582] duration metric: took 20.982660012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:42:54.504685 1194386 node_conditions.go:102] verifying NodePressure condition ...
	I0731 22:42:54.677167 1194386 request.go:629] Waited for 172.369878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes
	I0731 22:42:54.677247 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes
	I0731 22:42:54.677256 1194386 round_trippers.go:469] Request Headers:
	I0731 22:42:54.677269 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:42:54.677278 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:42:54.680340 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:42:54.681073 1194386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:42:54.681098 1194386 node_conditions.go:123] node cpu capacity is 2
	I0731 22:42:54.681110 1194386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:42:54.681114 1194386 node_conditions.go:123] node cpu capacity is 2
	I0731 22:42:54.681118 1194386 node_conditions.go:105] duration metric: took 176.428527ms to run NodePressure ...
	I0731 22:42:54.681130 1194386 start.go:241] waiting for startup goroutines ...
	I0731 22:42:54.681156 1194386 start.go:255] writing updated cluster config ...
	I0731 22:42:54.683187 1194386 out.go:177] 
	I0731 22:42:54.684527 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:42:54.684624 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:42:54.686141 1194386 out.go:177] * Starting "ha-150891-m03" control-plane node in "ha-150891" cluster
	I0731 22:42:54.687148 1194386 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:42:54.687180 1194386 cache.go:56] Caching tarball of preloaded images
	I0731 22:42:54.687312 1194386 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 22:42:54.687324 1194386 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 22:42:54.687418 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:42:54.687611 1194386 start.go:360] acquireMachinesLock for ha-150891-m03: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:42:54.687678 1194386 start.go:364] duration metric: took 29.245µs to acquireMachinesLock for "ha-150891-m03"
	I0731 22:42:54.687698 1194386 start.go:93] Provisioning new machine with config: &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:42:54.687796 1194386 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0731 22:42:54.689140 1194386 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 22:42:54.689312 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:42:54.689350 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:42:54.705381 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45415
	I0731 22:42:54.705867 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:42:54.706370 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:42:54.706394 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:42:54.706726 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:42:54.706922 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetMachineName
	I0731 22:42:54.707047 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:42:54.707211 1194386 start.go:159] libmachine.API.Create for "ha-150891" (driver="kvm2")
	I0731 22:42:54.707245 1194386 client.go:168] LocalClient.Create starting
	I0731 22:42:54.707288 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem
	I0731 22:42:54.707333 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:42:54.707357 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:42:54.707429 1194386 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem
	I0731 22:42:54.707457 1194386 main.go:141] libmachine: Decoding PEM data...
	I0731 22:42:54.707475 1194386 main.go:141] libmachine: Parsing certificate...
	I0731 22:42:54.707496 1194386 main.go:141] libmachine: Running pre-create checks...
	I0731 22:42:54.707509 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .PreCreateCheck
	I0731 22:42:54.707700 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetConfigRaw
	I0731 22:42:54.708192 1194386 main.go:141] libmachine: Creating machine...
	I0731 22:42:54.708210 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .Create
	I0731 22:42:54.708351 1194386 main.go:141] libmachine: (ha-150891-m03) Creating KVM machine...
	I0731 22:42:54.709626 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found existing default KVM network
	I0731 22:42:54.709793 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found existing private KVM network mk-ha-150891
	I0731 22:42:54.709952 1194386 main.go:141] libmachine: (ha-150891-m03) Setting up store path in /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03 ...
	I0731 22:42:54.709979 1194386 main.go:141] libmachine: (ha-150891-m03) Building disk image from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 22:42:54.710089 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:54.709964 1195189 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:42:54.710164 1194386 main.go:141] libmachine: (ha-150891-m03) Downloading /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 22:42:54.996918 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:54.996772 1195189 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa...
	I0731 22:42:55.135913 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:55.135778 1195189 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/ha-150891-m03.rawdisk...
	I0731 22:42:55.135944 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Writing magic tar header
	I0731 22:42:55.135954 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Writing SSH key tar header
	I0731 22:42:55.135962 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:55.135923 1195189 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03 ...
	I0731 22:42:55.136120 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03
	I0731 22:42:55.136157 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03 (perms=drwx------)
	I0731 22:42:55.136173 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines
	I0731 22:42:55.136196 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:42:55.136208 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186
	I0731 22:42:55.136224 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 22:42:55.136243 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines (perms=drwxr-xr-x)
	I0731 22:42:55.136254 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home/jenkins
	I0731 22:42:55.136268 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Checking permissions on dir: /home
	I0731 22:42:55.136279 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Skipping /home - not owner
	I0731 22:42:55.136295 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube (perms=drwxr-xr-x)
	I0731 22:42:55.136307 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186 (perms=drwxrwxr-x)
	I0731 22:42:55.136316 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 22:42:55.136321 1194386 main.go:141] libmachine: (ha-150891-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 22:42:55.136329 1194386 main.go:141] libmachine: (ha-150891-m03) Creating domain...
	I0731 22:42:55.137274 1194386 main.go:141] libmachine: (ha-150891-m03) define libvirt domain using xml: 
	I0731 22:42:55.137306 1194386 main.go:141] libmachine: (ha-150891-m03) <domain type='kvm'>
	I0731 22:42:55.137350 1194386 main.go:141] libmachine: (ha-150891-m03)   <name>ha-150891-m03</name>
	I0731 22:42:55.137378 1194386 main.go:141] libmachine: (ha-150891-m03)   <memory unit='MiB'>2200</memory>
	I0731 22:42:55.137413 1194386 main.go:141] libmachine: (ha-150891-m03)   <vcpu>2</vcpu>
	I0731 22:42:55.137437 1194386 main.go:141] libmachine: (ha-150891-m03)   <features>
	I0731 22:42:55.137453 1194386 main.go:141] libmachine: (ha-150891-m03)     <acpi/>
	I0731 22:42:55.137460 1194386 main.go:141] libmachine: (ha-150891-m03)     <apic/>
	I0731 22:42:55.137468 1194386 main.go:141] libmachine: (ha-150891-m03)     <pae/>
	I0731 22:42:55.137475 1194386 main.go:141] libmachine: (ha-150891-m03)     
	I0731 22:42:55.137482 1194386 main.go:141] libmachine: (ha-150891-m03)   </features>
	I0731 22:42:55.137491 1194386 main.go:141] libmachine: (ha-150891-m03)   <cpu mode='host-passthrough'>
	I0731 22:42:55.137499 1194386 main.go:141] libmachine: (ha-150891-m03)   
	I0731 22:42:55.137514 1194386 main.go:141] libmachine: (ha-150891-m03)   </cpu>
	I0731 22:42:55.137535 1194386 main.go:141] libmachine: (ha-150891-m03)   <os>
	I0731 22:42:55.137544 1194386 main.go:141] libmachine: (ha-150891-m03)     <type>hvm</type>
	I0731 22:42:55.137554 1194386 main.go:141] libmachine: (ha-150891-m03)     <boot dev='cdrom'/>
	I0731 22:42:55.137562 1194386 main.go:141] libmachine: (ha-150891-m03)     <boot dev='hd'/>
	I0731 22:42:55.137572 1194386 main.go:141] libmachine: (ha-150891-m03)     <bootmenu enable='no'/>
	I0731 22:42:55.137587 1194386 main.go:141] libmachine: (ha-150891-m03)   </os>
	I0731 22:42:55.137599 1194386 main.go:141] libmachine: (ha-150891-m03)   <devices>
	I0731 22:42:55.137610 1194386 main.go:141] libmachine: (ha-150891-m03)     <disk type='file' device='cdrom'>
	I0731 22:42:55.137626 1194386 main.go:141] libmachine: (ha-150891-m03)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/boot2docker.iso'/>
	I0731 22:42:55.137637 1194386 main.go:141] libmachine: (ha-150891-m03)       <target dev='hdc' bus='scsi'/>
	I0731 22:42:55.137656 1194386 main.go:141] libmachine: (ha-150891-m03)       <readonly/>
	I0731 22:42:55.137671 1194386 main.go:141] libmachine: (ha-150891-m03)     </disk>
	I0731 22:42:55.137685 1194386 main.go:141] libmachine: (ha-150891-m03)     <disk type='file' device='disk'>
	I0731 22:42:55.137697 1194386 main.go:141] libmachine: (ha-150891-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 22:42:55.137721 1194386 main.go:141] libmachine: (ha-150891-m03)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/ha-150891-m03.rawdisk'/>
	I0731 22:42:55.137731 1194386 main.go:141] libmachine: (ha-150891-m03)       <target dev='hda' bus='virtio'/>
	I0731 22:42:55.137741 1194386 main.go:141] libmachine: (ha-150891-m03)     </disk>
	I0731 22:42:55.137756 1194386 main.go:141] libmachine: (ha-150891-m03)     <interface type='network'>
	I0731 22:42:55.137769 1194386 main.go:141] libmachine: (ha-150891-m03)       <source network='mk-ha-150891'/>
	I0731 22:42:55.137780 1194386 main.go:141] libmachine: (ha-150891-m03)       <model type='virtio'/>
	I0731 22:42:55.137789 1194386 main.go:141] libmachine: (ha-150891-m03)     </interface>
	I0731 22:42:55.137805 1194386 main.go:141] libmachine: (ha-150891-m03)     <interface type='network'>
	I0731 22:42:55.137816 1194386 main.go:141] libmachine: (ha-150891-m03)       <source network='default'/>
	I0731 22:42:55.137824 1194386 main.go:141] libmachine: (ha-150891-m03)       <model type='virtio'/>
	I0731 22:42:55.137835 1194386 main.go:141] libmachine: (ha-150891-m03)     </interface>
	I0731 22:42:55.137843 1194386 main.go:141] libmachine: (ha-150891-m03)     <serial type='pty'>
	I0731 22:42:55.137854 1194386 main.go:141] libmachine: (ha-150891-m03)       <target port='0'/>
	I0731 22:42:55.137863 1194386 main.go:141] libmachine: (ha-150891-m03)     </serial>
	I0731 22:42:55.137871 1194386 main.go:141] libmachine: (ha-150891-m03)     <console type='pty'>
	I0731 22:42:55.137881 1194386 main.go:141] libmachine: (ha-150891-m03)       <target type='serial' port='0'/>
	I0731 22:42:55.137905 1194386 main.go:141] libmachine: (ha-150891-m03)     </console>
	I0731 22:42:55.137930 1194386 main.go:141] libmachine: (ha-150891-m03)     <rng model='virtio'>
	I0731 22:42:55.137946 1194386 main.go:141] libmachine: (ha-150891-m03)       <backend model='random'>/dev/random</backend>
	I0731 22:42:55.137962 1194386 main.go:141] libmachine: (ha-150891-m03)     </rng>
	I0731 22:42:55.137974 1194386 main.go:141] libmachine: (ha-150891-m03)     
	I0731 22:42:55.137984 1194386 main.go:141] libmachine: (ha-150891-m03)     
	I0731 22:42:55.137995 1194386 main.go:141] libmachine: (ha-150891-m03)   </devices>
	I0731 22:42:55.138005 1194386 main.go:141] libmachine: (ha-150891-m03) </domain>
	I0731 22:42:55.138016 1194386 main.go:141] libmachine: (ha-150891-m03) 
	I0731 22:42:55.145140 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:94:0b:a4 in network default
	I0731 22:42:55.145655 1194386 main.go:141] libmachine: (ha-150891-m03) Ensuring networks are active...
	I0731 22:42:55.145678 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:55.146466 1194386 main.go:141] libmachine: (ha-150891-m03) Ensuring network default is active
	I0731 22:42:55.146839 1194386 main.go:141] libmachine: (ha-150891-m03) Ensuring network mk-ha-150891 is active
	I0731 22:42:55.147165 1194386 main.go:141] libmachine: (ha-150891-m03) Getting domain xml...
	I0731 22:42:55.147949 1194386 main.go:141] libmachine: (ha-150891-m03) Creating domain...
	I0731 22:42:56.412263 1194386 main.go:141] libmachine: (ha-150891-m03) Waiting to get IP...
	I0731 22:42:56.413215 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:56.413614 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:56.413666 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:56.413618 1195189 retry.go:31] will retry after 311.711502ms: waiting for machine to come up
	I0731 22:42:56.727500 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:56.728058 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:56.728083 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:56.728017 1195189 retry.go:31] will retry after 377.689252ms: waiting for machine to come up
	I0731 22:42:57.107777 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:57.108222 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:57.108253 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:57.108160 1195189 retry.go:31] will retry after 361.803769ms: waiting for machine to come up
	I0731 22:42:57.471861 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:57.472344 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:57.472374 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:57.472289 1195189 retry.go:31] will retry after 366.370663ms: waiting for machine to come up
	I0731 22:42:57.839750 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:57.840206 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:57.840239 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:57.840155 1195189 retry.go:31] will retry after 589.677038ms: waiting for machine to come up
	I0731 22:42:58.432138 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:58.432590 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:58.432631 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:58.432495 1195189 retry.go:31] will retry after 639.331637ms: waiting for machine to come up
	I0731 22:42:59.074637 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:42:59.075071 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:42:59.075098 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:42:59.075035 1195189 retry.go:31] will retry after 1.165105041s: waiting for machine to come up
	I0731 22:43:00.241778 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:00.242278 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:00.242314 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:00.242248 1195189 retry.go:31] will retry after 1.417874278s: waiting for machine to come up
	I0731 22:43:01.661880 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:01.662343 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:01.662376 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:01.662294 1195189 retry.go:31] will retry after 1.838176737s: waiting for machine to come up
	I0731 22:43:03.503498 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:03.504051 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:03.504072 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:03.504005 1195189 retry.go:31] will retry after 1.866715326s: waiting for machine to come up
	I0731 22:43:05.371904 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:05.372437 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:05.372465 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:05.372367 1195189 retry.go:31] will retry after 2.815377302s: waiting for machine to come up
	I0731 22:43:08.189148 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:08.189639 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:08.189664 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:08.189609 1195189 retry.go:31] will retry after 3.016103993s: waiting for machine to come up
	I0731 22:43:11.207889 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:11.208362 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:11.208388 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:11.208303 1195189 retry.go:31] will retry after 2.745386751s: waiting for machine to come up
	I0731 22:43:13.955701 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:13.956167 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find current IP address of domain ha-150891-m03 in network mk-ha-150891
	I0731 22:43:13.956194 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | I0731 22:43:13.956116 1195189 retry.go:31] will retry after 3.553091765s: waiting for machine to come up
	I0731 22:43:17.512455 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.512924 1194386 main.go:141] libmachine: (ha-150891-m03) Found IP for machine: 192.168.39.241
	I0731 22:43:17.512950 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has current primary IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.512958 1194386 main.go:141] libmachine: (ha-150891-m03) Reserving static IP address...
	I0731 22:43:17.513491 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | unable to find host DHCP lease matching {name: "ha-150891-m03", mac: "52:54:00:f8:ec:6d", ip: "192.168.39.241"} in network mk-ha-150891
	I0731 22:43:17.598408 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Getting to WaitForSSH function...
	I0731 22:43:17.598436 1194386 main.go:141] libmachine: (ha-150891-m03) Reserved static IP address: 192.168.39.241
	I0731 22:43:17.598449 1194386 main.go:141] libmachine: (ha-150891-m03) Waiting for SSH to be available...
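[Editor's note] The "will retry after ..." lines above record a poll-with-growing-backoff wait for the domain's DHCP lease. A minimal, self-contained sketch of that pattern follows; it is illustrative only, not minikube's retry.go, and lookupLeaseIP is a hypothetical stand-in for the libvirt lease query the kvm2 driver performs.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLeaseIP is a placeholder: the real driver asks libvirt for the
	// DHCP lease matching the domain's MAC address in network mk-ha-150891.
	func lookupLeaseIP(domain string) (string, error) {
		return "", errors.New("no lease yet")
	}

	// waitForIP polls until an IP appears or the deadline passes, growing a
	// jittered delay between attempts, as the log's retry intervals show.
	func waitForIP(domain string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(domain); err == nil {
				return ip, nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if backoff < 4*time.Second {
				backoff *= 2
			}
		}
		return "", fmt.Errorf("timed out waiting for an IP for %s", domain)
	}

	func main() {
		if ip, err := waitForIP("ha-150891-m03", 5*time.Second); err != nil {
			fmt.Println("error:", err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}
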
	I0731 22:43:17.601142 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.601539 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:17.601572 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.601699 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Using SSH client type: external
	I0731 22:43:17.601725 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa (-rw-------)
	I0731 22:43:17.601757 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 22:43:17.601769 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | About to run SSH command:
	I0731 22:43:17.601784 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | exit 0
	I0731 22:43:17.724181 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | SSH cmd err, output: <nil>: 
	I0731 22:43:17.724471 1194386 main.go:141] libmachine: (ha-150891-m03) KVM machine creation complete!
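[Editor's note] The SSH probe above simply runs "exit 0" through the external ssh client with the options listed in the log. The sketch below reproduces that reachability check by shelling out to the system ssh binary with the same flags; the address and key path are the ones shown above, and this is an illustration rather than the driver's own code.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReachable succeeds only once sshd on addr accepts the key and can
	// run a command, mirroring the "exit 0" probe in the log.
	func sshReachable(addr, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + addr,
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		err := sshReachable("192.168.39.241",
			"/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa")
		fmt.Println("ssh probe error:", err)
	}
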
	I0731 22:43:17.724848 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetConfigRaw
	I0731 22:43:17.725444 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:17.725691 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:17.725856 1194386 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 22:43:17.725871 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetState
	I0731 22:43:17.727131 1194386 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 22:43:17.727148 1194386 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 22:43:17.727154 1194386 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 22:43:17.727160 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:17.729961 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.730388 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:17.730415 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.730567 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:17.730782 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.731011 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.731179 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:17.731365 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:17.731622 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:17.731635 1194386 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 22:43:17.835468 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:43:17.835492 1194386 main.go:141] libmachine: Detecting the provisioner...
	I0731 22:43:17.835513 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:17.838605 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.839065 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:17.839092 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.839314 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:17.839552 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.839722 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.839912 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:17.840133 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:17.840305 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:17.840317 1194386 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 22:43:17.944814 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 22:43:17.944915 1194386 main.go:141] libmachine: found compatible host: buildroot
	I0731 22:43:17.944929 1194386 main.go:141] libmachine: Provisioning with buildroot...
	I0731 22:43:17.944943 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetMachineName
	I0731 22:43:17.945227 1194386 buildroot.go:166] provisioning hostname "ha-150891-m03"
	I0731 22:43:17.945244 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetMachineName
	I0731 22:43:17.945453 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:17.948348 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.948753 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:17.948787 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:17.948985 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:17.949167 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.949321 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:17.949437 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:17.949660 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:17.949878 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:17.949892 1194386 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-150891-m03 && echo "ha-150891-m03" | sudo tee /etc/hostname
	I0731 22:43:18.066362 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150891-m03
	
	I0731 22:43:18.066394 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.069257 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.069654 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.069688 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.069904 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:18.070126 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.070313 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.070438 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:18.070633 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:18.070846 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:18.070863 1194386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-150891-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-150891-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-150891-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:43:18.185515 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:43:18.185558 1194386 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 22:43:18.185582 1194386 buildroot.go:174] setting up certificates
	I0731 22:43:18.185602 1194386 provision.go:84] configureAuth start
	I0731 22:43:18.185620 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetMachineName
	I0731 22:43:18.185957 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:43:18.188745 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.189101 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.189126 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.189318 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.191804 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.192159 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.192188 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.192322 1194386 provision.go:143] copyHostCerts
	I0731 22:43:18.192359 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:43:18.192402 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 22:43:18.192413 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:43:18.192479 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 22:43:18.192559 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:43:18.192583 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 22:43:18.192590 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:43:18.192615 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 22:43:18.192661 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:43:18.192679 1194386 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 22:43:18.192683 1194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:43:18.192708 1194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 22:43:18.192755 1194386 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.ha-150891-m03 san=[127.0.0.1 192.168.39.241 ha-150891-m03 localhost minikube]
	I0731 22:43:18.331536 1194386 provision.go:177] copyRemoteCerts
	I0731 22:43:18.331616 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:43:18.331654 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.334828 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.335247 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.335281 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.335494 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:18.335721 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.335916 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:18.336144 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:43:18.418445 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 22:43:18.418536 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 22:43:18.442720 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 22:43:18.442802 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0731 22:43:18.467289 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 22:43:18.467385 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 22:43:18.494185 1194386 provision.go:87] duration metric: took 308.563329ms to configureAuth
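[Editor's note] The configureAuth step above ("generating server cert ... san=[...]") signs a per-machine server certificate with the existing CA, embedding the node's IPs and names as SANs. A minimal sketch of that idea follows, assuming an RSA PKCS#1 CA key (as the ca-key.pem here is); file names come from the log, everything else is illustrative and error handling is trimmed for brevity.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caPEM, _ := os.ReadFile("ca.pem")
		caKeyPEM, _ := os.ReadFile("ca-key.pem")
		caBlock, _ := pem.Decode(caPEM)
		caKeyBlock, _ := pem.Decode(caKeyPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}

		priv, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-150891-m03"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirror the san=[...] list recorded in the log.
			DNSNames:    []string{"ha-150891-m03", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.241")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)})
	}
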
	I0731 22:43:18.494219 1194386 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:43:18.494487 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:43:18.494604 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.497605 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.497948 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.497970 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.498219 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:18.498435 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.498614 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.498736 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:18.498905 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:18.499094 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:18.499114 1194386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 22:43:18.762164 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 22:43:18.762196 1194386 main.go:141] libmachine: Checking connection to Docker...
	I0731 22:43:18.762204 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetURL
	I0731 22:43:18.763559 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | Using libvirt version 6000000
	I0731 22:43:18.765738 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.766055 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.766090 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.766227 1194386 main.go:141] libmachine: Docker is up and running!
	I0731 22:43:18.766244 1194386 main.go:141] libmachine: Reticulating splines...
	I0731 22:43:18.766251 1194386 client.go:171] duration metric: took 24.058995248s to LocalClient.Create
	I0731 22:43:18.766272 1194386 start.go:167] duration metric: took 24.059065044s to libmachine.API.Create "ha-150891"
	I0731 22:43:18.766282 1194386 start.go:293] postStartSetup for "ha-150891-m03" (driver="kvm2")
	I0731 22:43:18.766294 1194386 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:43:18.766312 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:18.766578 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:43:18.766602 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.768838 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.769209 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.769235 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.769376 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:18.769567 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.769722 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:18.769868 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:43:18.850737 1194386 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:43:18.855138 1194386 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:43:18.855176 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 22:43:18.855259 1194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 22:43:18.855362 1194386 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 22:43:18.855375 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /etc/ssl/certs/11794002.pem
	I0731 22:43:18.855486 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:43:18.865095 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:43:18.890674 1194386 start.go:296] duration metric: took 124.375062ms for postStartSetup
	I0731 22:43:18.890749 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetConfigRaw
	I0731 22:43:18.891459 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:43:18.894646 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.895057 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.895090 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.895394 1194386 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:43:18.895629 1194386 start.go:128] duration metric: took 24.207820708s to createHost
	I0731 22:43:18.895656 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:18.898870 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.899257 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:18.899290 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:18.899499 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:18.899794 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.899971 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:18.900148 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:18.900324 1194386 main.go:141] libmachine: Using SSH client type: native
	I0731 22:43:18.900533 1194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0731 22:43:18.900544 1194386 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 22:43:19.008729 1194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722465798.983381385
	
	I0731 22:43:19.008761 1194386 fix.go:216] guest clock: 1722465798.983381385
	I0731 22:43:19.008772 1194386 fix.go:229] Guest: 2024-07-31 22:43:18.983381385 +0000 UTC Remote: 2024-07-31 22:43:18.895642 +0000 UTC m=+158.430426748 (delta=87.739385ms)
	I0731 22:43:19.008796 1194386 fix.go:200] guest clock delta is within tolerance: 87.739385ms
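[Editor's note] The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine if the delta stays within a tolerance. A small sketch of that comparison, using the value captured in the log; the tolerance shown is an assumed example, not minikube's configured one.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's epoch-seconds output and reports the
	// absolute host/guest skew and whether it fits inside tolerance.
	func clockDelta(guestDateOutput string, tolerance time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(guestDateOutput, 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance, nil
	}

	func main() {
		// Guest timestamp taken from the log line above.
		d, ok, err := clockDelta("1722465798.983381385", 2*time.Second)
		fmt.Println(d, ok, err)
	}
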
	I0731 22:43:19.008811 1194386 start.go:83] releasing machines lock for "ha-150891-m03", held for 24.321114914s
	I0731 22:43:19.008834 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:19.009144 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:43:19.011897 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.012288 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:19.012319 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.014695 1194386 out.go:177] * Found network options:
	I0731 22:43:19.016080 1194386 out.go:177]   - NO_PROXY=192.168.39.105,192.168.39.224
	W0731 22:43:19.017320 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:43:19.017344 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:43:19.017364 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:19.018037 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:19.018268 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:43:19.018380 1194386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 22:43:19.018425 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	W0731 22:43:19.018460 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	W0731 22:43:19.018498 1194386 proxy.go:119] fail to check proxy env: Error ip not in block
	I0731 22:43:19.018571 1194386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 22:43:19.018595 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:43:19.021532 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.021728 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.022029 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:19.022061 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.022222 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:19.022243 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:19.022290 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:19.022408 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:43:19.022507 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:19.022611 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:43:19.022661 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:19.022761 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:43:19.022837 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:43:19.022862 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
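[Editor's note] Two checks are launched back to back above (registry reachability via curl and a podman presence test), each with its own SSH client, so they effectively run in parallel on the new node. The sketch below shows the same fan-out with plain goroutines; for illustration it runs the commands locally via sh -c rather than over SSH.

	package main

	import (
		"fmt"
		"os/exec"
		"sync"
	)

	func main() {
		cmds := []string{
			"curl -sS -m 2 https://registry.k8s.io/",
			`sudo sh -c "podman version >/dev/null"`,
		}
		var wg sync.WaitGroup
		for _, c := range cmds {
			wg.Add(1)
			go func(c string) {
				defer wg.Done()
				err := exec.Command("sh", "-c", c).Run()
				fmt.Printf("%q -> err=%v\n", c, err)
			}(c)
		}
		wg.Wait()
	}
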
	I0731 22:43:19.252053 1194386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 22:43:19.258228 1194386 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:43:19.258318 1194386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:43:19.274777 1194386 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 22:43:19.274807 1194386 start.go:495] detecting cgroup driver to use...
	I0731 22:43:19.274879 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:43:19.291751 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:43:19.307383 1194386 docker.go:217] disabling cri-docker service (if available) ...
	I0731 22:43:19.307457 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 22:43:19.322719 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 22:43:19.337567 1194386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 22:43:19.457968 1194386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 22:43:19.629095 1194386 docker.go:233] disabling docker service ...
	I0731 22:43:19.629167 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 22:43:19.647627 1194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 22:43:19.660580 1194386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 22:43:19.779952 1194386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 22:43:19.892979 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 22:43:19.908391 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:43:19.926742 1194386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 22:43:19.926806 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:19.938918 1194386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 22:43:19.938989 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:19.950401 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:19.962124 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:19.972986 1194386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:43:19.984219 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:19.995444 1194386 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:20.014921 1194386 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:43:20.026727 1194386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:43:20.037116 1194386 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 22:43:20.037185 1194386 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 22:43:20.050003 1194386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:43:20.060866 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:43:20.175020 1194386 ssh_runner.go:195] Run: sudo systemctl restart crio
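[Editor's note] Pieced together from the sed edits above and CRI-O's standard config schema (pause_image under [crio.image], runtime knobs under [crio.runtime]), the drop-in /etc/crio/crio.conf.d/02-crio.conf ends up with roughly the following settings before crio is restarted. This is an illustrative reconstruction, not a file captured from the VM.

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
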
	I0731 22:43:20.309613 1194386 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 22:43:20.309718 1194386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 22:43:20.314499 1194386 start.go:563] Will wait 60s for crictl version
	I0731 22:43:20.314571 1194386 ssh_runner.go:195] Run: which crictl
	I0731 22:43:20.319563 1194386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:43:20.361170 1194386 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 22:43:20.361273 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:43:20.391549 1194386 ssh_runner.go:195] Run: crio --version
	I0731 22:43:20.422842 1194386 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 22:43:20.424428 1194386 out.go:177]   - env NO_PROXY=192.168.39.105
	I0731 22:43:20.426139 1194386 out.go:177]   - env NO_PROXY=192.168.39.105,192.168.39.224
	I0731 22:43:20.427240 1194386 main.go:141] libmachine: (ha-150891-m03) Calling .GetIP
	I0731 22:43:20.430108 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:20.430537 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:43:20.430561 1194386 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:43:20.430835 1194386 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 22:43:20.435079 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:43:20.447675 1194386 mustload.go:65] Loading cluster: ha-150891
	I0731 22:43:20.447955 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:43:20.448323 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:43:20.448374 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:43:20.464739 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
	I0731 22:43:20.465283 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:43:20.465862 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:43:20.465890 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:43:20.466208 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:43:20.466502 1194386 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:43:20.468414 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:43:20.468753 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:43:20.468799 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:43:20.485333 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0731 22:43:20.485778 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:43:20.486311 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:43:20.486338 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:43:20.486680 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:43:20.486882 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:43:20.487060 1194386 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891 for IP: 192.168.39.241
	I0731 22:43:20.487070 1194386 certs.go:194] generating shared ca certs ...
	I0731 22:43:20.487086 1194386 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:43:20.487226 1194386 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 22:43:20.487292 1194386 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 22:43:20.487306 1194386 certs.go:256] generating profile certs ...
	I0731 22:43:20.487389 1194386 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key
	I0731 22:43:20.487425 1194386 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.0f836ffe
	I0731 22:43:20.487451 1194386 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.0f836ffe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.105 192.168.39.224 192.168.39.241 192.168.39.254]
	I0731 22:43:20.555181 1194386 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.0f836ffe ...
	I0731 22:43:20.555219 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.0f836ffe: {Name:mkc8b2401f2f9f966b15bd390172fe6b11839037 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:43:20.555423 1194386 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.0f836ffe ...
	I0731 22:43:20.555442 1194386 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.0f836ffe: {Name:mk1efed90e04277ecee2ba1c415a4310493e916e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:43:20.555545 1194386 certs.go:381] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.0f836ffe -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt
	I0731 22:43:20.555702 1194386 certs.go:385] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.0f836ffe -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key
	I0731 22:43:20.555866 1194386 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key
	I0731 22:43:20.555885 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:43:20.555905 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:43:20.555929 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:43:20.555950 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:43:20.555968 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:43:20.555987 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:43:20.556004 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:43:20.556022 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:43:20.556109 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 22:43:20.556162 1194386 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 22:43:20.556176 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 22:43:20.556211 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 22:43:20.556244 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 22:43:20.556278 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 22:43:20.556331 1194386 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:43:20.556376 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /usr/share/ca-certificates/11794002.pem
	I0731 22:43:20.556397 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:43:20.556415 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem -> /usr/share/ca-certificates/1179400.pem
	I0731 22:43:20.556460 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:43:20.559798 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:43:20.560204 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:43:20.560220 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:43:20.560434 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:43:20.560647 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:43:20.560822 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:43:20.560929 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:43:20.636520 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0731 22:43:20.642300 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0731 22:43:20.653070 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0731 22:43:20.657085 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0731 22:43:20.668817 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0731 22:43:20.673108 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0731 22:43:20.683662 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0731 22:43:20.687852 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0731 22:43:20.699764 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0731 22:43:20.704447 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0731 22:43:20.716290 1194386 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0731 22:43:20.720294 1194386 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0731 22:43:20.731101 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:43:20.755522 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:43:20.781270 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:43:20.805155 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 22:43:20.829180 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0731 22:43:20.852764 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 22:43:20.877560 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:43:20.902249 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:43:20.926291 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 22:43:20.949801 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:43:20.974289 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 22:43:21.000494 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0731 22:43:21.018051 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0731 22:43:21.034844 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0731 22:43:21.052916 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0731 22:43:21.071129 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0731 22:43:21.091925 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0731 22:43:21.108701 1194386 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0731 22:43:21.126834 1194386 ssh_runner.go:195] Run: openssl version
	I0731 22:43:21.132603 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 22:43:21.144575 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 22:43:21.149631 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 22:43:21.149706 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 22:43:21.155740 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 22:43:21.167550 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:43:21.178551 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:43:21.183482 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:43:21.183582 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:43:21.189616 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:43:21.200674 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 22:43:21.212546 1194386 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 22:43:21.217364 1194386 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 22:43:21.217442 1194386 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 22:43:21.223342 1194386 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
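
The block above installs each CA bundle under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the user certs), which is the layout OpenSSL-based clients use to locate trusted CAs. A rough local sketch of that hash-and-link step follows, assuming openssl and ln are available and skipping the SSH hop the log actually takes.

// Minimal local sketch of the hash-and-symlink step above: compute the OpenSSL
// subject hash of a CA certificate and link it into /etc/ssl/certs/<hash>.0.
// The path is a placeholder; the real flow runs these commands on the VM via SSH.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	caPath := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
	if err != nil {
		log.Fatalf("hashing %s: %v", caPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)

	// ln -fs replaces any existing link, matching the "test -L ... || ln -fs ..." guard above.
	if err := exec.Command("sudo", "ln", "-fs", caPath, link).Run(); err != nil {
		log.Fatalf("linking %s -> %s: %v", link, caPath, err)
	}
	fmt.Println("linked", link, "->", caPath)
}
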
	I0731 22:43:21.234958 1194386 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:43:21.239398 1194386 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 22:43:21.239485 1194386 kubeadm.go:934] updating node {m03 192.168.39.241 8443 v1.30.3 crio true true} ...
	I0731 22:43:21.239601 1194386 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-150891-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:43:21.239638 1194386 kube-vip.go:115] generating kube-vip config ...
	I0731 22:43:21.239703 1194386 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:43:21.255986 1194386 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:43:21.256074 1194386 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
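
The static pod above is rendered by kube-vip.go after the modprobe check at 22:43:21.255 confirms the IPVS modules are loadable, which is what enables control-plane load balancing (lb_enable/lb_port) in addition to the ARP-advertised VIP 192.168.39.254. Below is a simplified sketch of that kind of templating; the field names and values mirror the generated manifest, but the template itself is cut down relative to minikube's real one.

// Simplified sketch of rendering a kube-vip env block from a few parameters.
// The real template in minikube's kube-vip.go is larger; values here mirror the
// generated manifest above (VIP 192.168.39.254, port 8443, interface eth0).
package main

import (
	"log"
	"os"
	"text/template"
)

const envTmpl = `    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .Address }}
{{- if .EnableLB }}
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{ .Port }}"
{{- end }}
`

func main() {
	params := struct {
		Address   string
		Interface string
		Port      int
		EnableLB  bool
	}{
		Address:   "192.168.39.254",
		Interface: "eth0",
		Port:      8443,
		EnableLB:  true, // set only after the ip_vs modprobe check succeeds
	}
	t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
	if err := t.Execute(os.Stdout, params); err != nil {
		log.Fatal(err)
	}
}
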
	I0731 22:43:21.256168 1194386 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:43:21.267553 1194386 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0731 22:43:21.267610 1194386 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0731 22:43:21.277968 1194386 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0731 22:43:21.278038 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:43:21.277973 1194386 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0731 22:43:21.277973 1194386 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0731 22:43:21.278126 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:43:21.278129 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:43:21.278224 1194386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0731 22:43:21.278225 1194386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0731 22:43:21.292542 1194386 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:43:21.292652 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0731 22:43:21.292670 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0731 22:43:21.292693 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0731 22:43:21.292694 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0731 22:43:21.292664 1194386 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0731 22:43:21.308756 1194386 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0731 22:43:21.308794 1194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
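
The binary.go lines above bypass the local cache and fetch kubeadm, kubectl and kubelet directly from dl.k8s.io, with the checksum=file:... suffix asking the downloader to verify each file against the published .sha256. A stripped-down, standard-library-only sketch of that download-and-verify step follows; it reads the whole binary into memory for brevity, which a real downloader would avoid.

// Sketch: download a Kubernetes release binary and verify it against the
// published .sha256 file, as the checksum=file:... URLs above describe.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet"

	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	want := strings.Fields(string(sumFile))[0] // file holds the hex digest, possibly followed by a name

	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	if got != want {
		log.Fatalf("checksum mismatch: got %s, want %s", got, want)
	}
	fmt.Printf("kubelet verified, %d bytes\n", len(bin))
}
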
	I0731 22:43:22.264318 1194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0731 22:43:22.274578 1194386 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0731 22:43:22.291970 1194386 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:43:22.309449 1194386 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 22:43:22.327474 1194386 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:43:22.331734 1194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 22:43:22.345065 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:43:22.492236 1194386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:43:22.510006 1194386 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:43:22.510433 1194386 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:43:22.510488 1194386 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:43:22.527382 1194386 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
	I0731 22:43:22.527849 1194386 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:43:22.528391 1194386 main.go:141] libmachine: Using API Version  1
	I0731 22:43:22.528420 1194386 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:43:22.528828 1194386 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:43:22.529059 1194386 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:43:22.529249 1194386 start.go:317] joinCluster: &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:43:22.529422 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0731 22:43:22.529444 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:43:22.532291 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:43:22.532844 1194386 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:43:22.532872 1194386 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:43:22.533030 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:43:22.533238 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:43:22.533430 1194386 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:43:22.533609 1194386 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:43:22.695856 1194386 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:43:22.695917 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yn92gg.uccsz8l2wa3z9w2v --discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-150891-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443"
	I0731 22:43:45.488902 1194386 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yn92gg.uccsz8l2wa3z9w2v --discovery-token-ca-cert-hash sha256:6f76dc449a2aa3c3453b518e32b2d8993298a70e73000b536f12d38e676252ef --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-150891-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443": (22.792955755s)
	I0731 22:43:45.488953 1194386 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0731 22:43:45.954644 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-150891-m03 minikube.k8s.io/updated_at=2024_07_31T22_43_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c minikube.k8s.io/name=ha-150891 minikube.k8s.io/primary=false
	I0731 22:43:46.072646 1194386 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-150891-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0731 22:43:46.176653 1194386 start.go:319] duration metric: took 23.647404089s to joinCluster
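
Joining m03 follows the usual kubeadm HA sequence: a join command with a fresh, non-expiring token (--ttl=0) is printed on the existing control plane at 22:43:22.529, executed on the new node with --control-plane and that node's advertise address, and the node is then labeled and has its control-plane NoSchedule taint removed. A hedged sketch of generating that join command is below; it shells out locally for illustration, whereas minikube runs the same commands on the VMs over SSH.

// Sketch: generate a control-plane join command the way the log does at
// 22:43:22.529 ("kubeadm token create --print-join-command --ttl=0") and append
// the flags used for the new node. Runs locally for illustration only.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatalf("creating join command: %v", err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// Flags mirroring the join invocation above; the advertise address and node
	// name would come from the new machine's config.
	full := joinCmd + " --control-plane" +
		" --apiserver-advertise-address=192.168.39.241" +
		" --apiserver-bind-port=8443" +
		" --node-name=ha-150891-m03" +
		" --ignore-preflight-errors=all"

	fmt.Println(full)
}
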
	I0731 22:43:46.176776 1194386 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 22:43:46.177133 1194386 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:43:46.178084 1194386 out.go:177] * Verifying Kubernetes components...
	I0731 22:43:46.179301 1194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:43:46.386899 1194386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:43:46.414585 1194386 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:43:46.414819 1194386 kapi.go:59] client config for ha-150891: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d035c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0731 22:43:46.414897 1194386 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.105:8443
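
kapi.go builds the rest.Config straight from the kubeconfig (QPS:0, Burst:0 means client-go falls back to its conservative defaults of roughly 5 requests/s with a burst of 10), and kubeadm.go then swaps the stale HA VIP 192.168.39.254:8443 for the node's own endpoint. That default rate limit is also what produces the "Waited ... due to client-side throttling" messages further down. Below is a small sketch of the same setup with client-go; the kubeconfig path is a placeholder and the QPS/Burst values are illustrative.

// Sketch: build a client from the kubeconfig the way kapi.go does here, then
// override the stale VIP host and raise QPS/Burst so readiness polling is not
// throttled. Paths and endpoints are placeholders taken from the log.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder kubeconfig path
	if err != nil {
		log.Fatal(err)
	}

	// Point at a concrete control-plane endpoint instead of the HA VIP, as
	// kubeadm.go:483 does above, and loosen the default client-side rate limit.
	cfg.Host = "https://192.168.39.105:8443"
	cfg.QPS = 50
	cfg.Burst = 100

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
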
	I0731 22:43:46.415131 1194386 node_ready.go:35] waiting up to 6m0s for node "ha-150891-m03" to be "Ready" ...
	I0731 22:43:46.415223 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:46.415231 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:46.415238 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:46.415242 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:46.418595 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:46.915567 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:46.915592 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:46.915601 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:46.915606 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:46.919505 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:47.416036 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:47.416060 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:47.416068 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:47.416081 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:47.419801 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:47.916073 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:47.916120 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:47.916133 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:47.916140 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:47.920309 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:43:48.416297 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:48.416320 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:48.416329 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:48.416333 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:48.420161 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:48.420770 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:48.915740 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:48.915772 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:48.915785 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:48.915793 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:48.919878 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:43:49.416195 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:49.416239 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:49.416249 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:49.416252 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:49.420198 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:49.915741 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:49.915775 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:49.915786 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:49.915794 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:49.919488 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:50.415458 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:50.415486 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:50.415494 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:50.415499 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:50.419037 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:50.915410 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:50.915438 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:50.915446 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:50.915451 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:50.919270 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:50.919705 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:51.416222 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:51.416251 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:51.416263 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:51.416268 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:51.420074 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:51.915844 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:51.915876 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:51.915888 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:51.915893 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:51.919367 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:52.415794 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:52.415879 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:52.415906 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:52.415914 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:52.423284 1194386 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0731 22:43:52.916224 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:52.916248 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:52.916258 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:52.916262 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:52.919768 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:52.920304 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:53.415521 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:53.415547 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:53.415556 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:53.415559 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:53.418678 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:53.915435 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:53.915465 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:53.915473 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:53.915478 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:53.918908 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:54.415998 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:54.416024 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:54.416033 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:54.416037 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:54.419295 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:54.915916 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:54.915940 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:54.915949 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:54.915953 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:54.919873 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:54.920481 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:55.415757 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:55.415791 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:55.415801 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:55.415806 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:55.419361 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:55.915668 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:55.915694 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:55.915702 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:55.915706 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:55.919284 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:56.415352 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:56.415381 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:56.415391 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:56.415396 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:56.418994 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:56.915814 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:56.915853 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:56.915865 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:56.915872 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:56.920083 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:43:56.921114 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:57.416047 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:57.416079 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:57.416111 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:57.416117 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:57.419701 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:57.916292 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:57.916317 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:57.916326 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:57.916330 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:57.919935 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:58.415824 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:58.415852 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:58.415862 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:58.415867 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:58.419822 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:58.916033 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:58.916059 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:58.916067 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:58.916071 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:58.919588 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:43:59.415798 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:59.415830 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:59.415842 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:59.415848 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:59.420196 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:43:59.420792 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:43:59.916347 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:43:59.916372 1194386 round_trippers.go:469] Request Headers:
	I0731 22:43:59.916381 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:43:59.916384 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:43:59.919682 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:00.415444 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:00.415471 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:00.415480 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:00.415483 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:00.418943 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:00.916163 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:00.916190 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:00.916198 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:00.916202 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:00.919264 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:01.416255 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:01.416279 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:01.416288 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:01.416293 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:01.419698 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:01.915625 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:01.915665 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:01.915678 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:01.915685 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:01.919543 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:01.920013 1194386 node_ready.go:53] node "ha-150891-m03" has status "Ready":"False"
	I0731 22:44:02.415872 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:02.415899 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:02.415910 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:02.415915 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:02.419572 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:02.915938 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:02.915962 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:02.915970 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:02.915974 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:02.919715 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.415631 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:03.415659 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.415668 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.415675 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.419144 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.419712 1194386 node_ready.go:49] node "ha-150891-m03" has status "Ready":"True"
	I0731 22:44:03.419733 1194386 node_ready.go:38] duration metric: took 17.004587794s for node "ha-150891-m03" to be "Ready" ...
	I0731 22:44:03.419743 1194386 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
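
node_ready.go polls GET /api/v1/nodes/ha-150891-m03 roughly every 500 ms until the node's Ready condition turns True (about 17 s here), and pod_ready.go then applies the same pattern to each system-critical pod. A compact sketch of that wait using client-go follows; the kubeconfig path is a placeholder, and the poll interval and timeout mirror the log.

// Sketch: wait for a node's Ready condition the way node_ready.go polls above.
// The kubeconfig path is a placeholder; interval and timeout mirror the log.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, nil // treat transient errors as "not ready yet", like the poll loop above
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Poll every 500ms for up to 6 minutes, matching the cadence and budget above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) { return nodeReady(ctx, cs, "ha-150891-m03") })
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(`node "ha-150891-m03" is Ready`)
}
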
	I0731 22:44:03.419830 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:44:03.419840 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.419847 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.419852 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.426803 1194386 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:44:03.434683 1194386 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.434816 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-4928n
	I0731 22:44:03.434827 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.434839 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.434849 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.438280 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.439024 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:03.439044 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.439057 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.439064 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.443273 1194386 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0731 22:44:03.443765 1194386 pod_ready.go:92] pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:03.443784 1194386 pod_ready.go:81] duration metric: took 9.066139ms for pod "coredns-7db6d8ff4d-4928n" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.443795 1194386 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.443877 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-rkd4j
	I0731 22:44:03.443887 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.443895 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.443899 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.446490 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:44:03.447585 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:03.447626 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.447638 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.447644 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.450878 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.451331 1194386 pod_ready.go:92] pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:03.451351 1194386 pod_ready.go:81] duration metric: took 7.548977ms for pod "coredns-7db6d8ff4d-rkd4j" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.451361 1194386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.451415 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891
	I0731 22:44:03.451423 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.451430 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.451433 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.454342 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:44:03.454921 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:03.454939 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.454947 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.454952 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.457911 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:44:03.458350 1194386 pod_ready.go:92] pod "etcd-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:03.458374 1194386 pod_ready.go:81] duration metric: took 7.005484ms for pod "etcd-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.458388 1194386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.458462 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891-m02
	I0731 22:44:03.458472 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.458485 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.458504 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.461397 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:44:03.461927 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:03.461943 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.461952 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.461958 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.464805 1194386 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0731 22:44:03.465353 1194386 pod_ready.go:92] pod "etcd-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:03.465379 1194386 pod_ready.go:81] duration metric: took 6.978907ms for pod "etcd-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.465392 1194386 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.616682 1194386 request.go:629] Waited for 151.195905ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891-m03
	I0731 22:44:03.616746 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/etcd-ha-150891-m03
	I0731 22:44:03.616753 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.616763 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.616769 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.620625 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.816614 1194386 request.go:629] Waited for 195.444036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:03.816704 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:03.816711 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:03.816721 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:03.816731 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:03.820355 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:03.821203 1194386 pod_ready.go:92] pod "etcd-ha-150891-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:03.821227 1194386 pod_ready.go:81] duration metric: took 355.826856ms for pod "etcd-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:03.821251 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:04.016613 1194386 request.go:629] Waited for 195.26955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891
	I0731 22:44:04.016711 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891
	I0731 22:44:04.016718 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:04.016729 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:04.016738 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:04.020320 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:04.216469 1194386 request.go:629] Waited for 195.383981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:04.216577 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:04.216588 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:04.216602 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:04.216611 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:04.219872 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:04.220488 1194386 pod_ready.go:92] pod "kube-apiserver-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:04.220511 1194386 pod_ready.go:81] duration metric: took 399.24917ms for pod "kube-apiserver-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:04.220522 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:04.416625 1194386 request.go:629] Waited for 196.005775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m02
	I0731 22:44:04.416691 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m02
	I0731 22:44:04.416697 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:04.416705 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:04.416712 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:04.419947 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:04.616576 1194386 request.go:629] Waited for 195.788726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:04.616662 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:04.616668 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:04.616676 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:04.616684 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:04.619902 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:04.620491 1194386 pod_ready.go:92] pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:04.620519 1194386 pod_ready.go:81] duration metric: took 399.987689ms for pod "kube-apiserver-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:04.620534 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:04.815902 1194386 request.go:629] Waited for 195.285802ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m03
	I0731 22:44:04.816002 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-150891-m03
	I0731 22:44:04.816012 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:04.816020 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:04.816026 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:04.819509 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.016619 1194386 request.go:629] Waited for 196.368245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:05.016702 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:05.016714 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:05.016726 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:05.016738 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:05.020239 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.020664 1194386 pod_ready.go:92] pod "kube-apiserver-ha-150891-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:05.020686 1194386 pod_ready.go:81] duration metric: took 400.145516ms for pod "kube-apiserver-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:05.020696 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:05.215840 1194386 request.go:629] Waited for 195.070368ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891
	I0731 22:44:05.215907 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891
	I0731 22:44:05.215913 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:05.215921 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:05.215925 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:05.219501 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.415990 1194386 request.go:629] Waited for 195.397538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:05.416076 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:05.416083 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:05.416115 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:05.416121 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:05.419718 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.420477 1194386 pod_ready.go:92] pod "kube-controller-manager-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:05.420502 1194386 pod_ready.go:81] duration metric: took 399.798279ms for pod "kube-controller-manager-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:05.420514 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:05.616245 1194386 request.go:629] Waited for 195.615583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m02
	I0731 22:44:05.616335 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m02
	I0731 22:44:05.616346 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:05.616359 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:05.616366 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:05.620138 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.816454 1194386 request.go:629] Waited for 195.4864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:05.816551 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:05.816559 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:05.816570 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:05.816581 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:05.819761 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:05.820249 1194386 pod_ready.go:92] pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:05.820268 1194386 pod_ready.go:81] duration metric: took 399.747549ms for pod "kube-controller-manager-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:05.820280 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:06.016444 1194386 request.go:629] Waited for 196.063578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m03
	I0731 22:44:06.016523 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-150891-m03
	I0731 22:44:06.016529 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:06.016536 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:06.016540 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:06.019960 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:06.216177 1194386 request.go:629] Waited for 195.238135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:06.216267 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:06.216274 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:06.216284 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:06.216292 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:06.219535 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:06.220036 1194386 pod_ready.go:92] pod "kube-controller-manager-ha-150891-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:06.220058 1194386 pod_ready.go:81] duration metric: took 399.769239ms for pod "kube-controller-manager-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:06.220068 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9xcss" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:06.416428 1194386 request.go:629] Waited for 196.255398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xcss
	I0731 22:44:06.416515 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9xcss
	I0731 22:44:06.416523 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:06.416538 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:06.416546 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:06.419896 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:06.615996 1194386 request.go:629] Waited for 195.374732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:06.616082 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:06.616104 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:06.616116 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:06.616123 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:06.619394 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:06.619930 1194386 pod_ready.go:92] pod "kube-proxy-9xcss" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:06.619953 1194386 pod_ready.go:81] duration metric: took 399.876714ms for pod "kube-proxy-9xcss" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:06.619963 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-df4cg" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:06.815906 1194386 request.go:629] Waited for 195.838575ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-df4cg
	I0731 22:44:06.815984 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-df4cg
	I0731 22:44:06.815991 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:06.816000 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:06.816005 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:06.819817 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.015762 1194386 request.go:629] Waited for 195.267194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:07.015872 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:07.015880 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:07.015892 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:07.015900 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:07.019123 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.019705 1194386 pod_ready.go:92] pod "kube-proxy-df4cg" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:07.019727 1194386 pod_ready.go:81] duration metric: took 399.756233ms for pod "kube-proxy-df4cg" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:07.019740 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nmkp9" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:07.215766 1194386 request.go:629] Waited for 195.926306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nmkp9
	I0731 22:44:07.215861 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nmkp9
	I0731 22:44:07.215868 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:07.215876 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:07.215883 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:07.219202 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.416248 1194386 request.go:629] Waited for 196.380568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:07.416317 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:07.416325 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:07.416335 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:07.416341 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:07.419642 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.420146 1194386 pod_ready.go:92] pod "kube-proxy-nmkp9" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:07.420167 1194386 pod_ready.go:81] duration metric: took 400.416252ms for pod "kube-proxy-nmkp9" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:07.420177 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:07.615750 1194386 request.go:629] Waited for 195.478503ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891
	I0731 22:44:07.615834 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891
	I0731 22:44:07.615841 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:07.615849 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:07.615854 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:07.619533 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.815694 1194386 request.go:629] Waited for 195.291759ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:07.815762 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891
	I0731 22:44:07.815767 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:07.815775 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:07.815779 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:07.819412 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:07.820007 1194386 pod_ready.go:92] pod "kube-scheduler-ha-150891" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:07.820027 1194386 pod_ready.go:81] duration metric: took 399.844665ms for pod "kube-scheduler-ha-150891" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:07.820037 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:08.016209 1194386 request.go:629] Waited for 196.070733ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m02
	I0731 22:44:08.016289 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m02
	I0731 22:44:08.016294 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.016304 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.016312 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.019423 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:08.216329 1194386 request.go:629] Waited for 196.370784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:08.216394 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m02
	I0731 22:44:08.216400 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.216409 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.216414 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.219840 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:08.220324 1194386 pod_ready.go:92] pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:08.220347 1194386 pod_ready.go:81] duration metric: took 400.303486ms for pod "kube-scheduler-ha-150891-m02" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:08.220356 1194386 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:08.416442 1194386 request.go:629] Waited for 195.994731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m03
	I0731 22:44:08.416537 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-150891-m03
	I0731 22:44:08.416543 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.416552 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.416556 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.419743 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:08.616709 1194386 request.go:629] Waited for 196.377943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:08.616809 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes/ha-150891-m03
	I0731 22:44:08.616819 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.616829 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.616836 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.620591 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:08.621093 1194386 pod_ready.go:92] pod "kube-scheduler-ha-150891-m03" in "kube-system" namespace has status "Ready":"True"
	I0731 22:44:08.621115 1194386 pod_ready.go:81] duration metric: took 400.752015ms for pod "kube-scheduler-ha-150891-m03" in "kube-system" namespace to be "Ready" ...
	I0731 22:44:08.621126 1194386 pod_ready.go:38] duration metric: took 5.201372685s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 22:44:08.621142 1194386 api_server.go:52] waiting for apiserver process to appear ...
	I0731 22:44:08.621199 1194386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:44:08.635926 1194386 api_server.go:72] duration metric: took 22.459091752s to wait for apiserver process to appear ...
	I0731 22:44:08.635955 1194386 api_server.go:88] waiting for apiserver healthz status ...
	I0731 22:44:08.635990 1194386 api_server.go:253] Checking apiserver healthz at https://192.168.39.105:8443/healthz ...
	I0731 22:44:08.642616 1194386 api_server.go:279] https://192.168.39.105:8443/healthz returned 200:
	ok
	I0731 22:44:08.642793 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/version
	I0731 22:44:08.642809 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.642821 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.642832 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.643767 1194386 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0731 22:44:08.643854 1194386 api_server.go:141] control plane version: v1.30.3
	I0731 22:44:08.643874 1194386 api_server.go:131] duration metric: took 7.911396ms to wait for apiserver health ...
	I0731 22:44:08.643888 1194386 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 22:44:08.816342 1194386 request.go:629] Waited for 172.346114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:44:08.816418 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:44:08.816430 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:08.816441 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:08.816450 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:08.822997 1194386 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:44:08.830767 1194386 system_pods.go:59] 24 kube-system pods found
	I0731 22:44:08.830803 1194386 system_pods.go:61] "coredns-7db6d8ff4d-4928n" [258080d9-48d4-4214-a8c2-ccdd229a3a4f] Running
	I0731 22:44:08.830808 1194386 system_pods.go:61] "coredns-7db6d8ff4d-rkd4j" [b40942b0-bff9-4a49-88a3-d188d5b7dcbe] Running
	I0731 22:44:08.830812 1194386 system_pods.go:61] "etcd-ha-150891" [3f5f2e82-256b-406e-b58b-51255d338219] Running
	I0731 22:44:08.830816 1194386 system_pods.go:61] "etcd-ha-150891-m02" [d20ff7ae-a18e-476a-9f38-bf9d2eea9e32] Running
	I0731 22:44:08.830819 1194386 system_pods.go:61] "etcd-ha-150891-m03" [d320cf0e-70df-42ce-8a71-b103ab62c498] Running
	I0731 22:44:08.830822 1194386 system_pods.go:61] "kindnet-4qn8c" [4143fb96-5f2a-4107-807d-29ffbf5a95b8] Running
	I0731 22:44:08.830825 1194386 system_pods.go:61] "kindnet-8bkwq" [9d1ea907-d2a6-44ae-8a18-86686b21c2e6] Running
	I0731 22:44:08.830827 1194386 system_pods.go:61] "kindnet-bz2j7" [160def8b-f6ae-4664-8489-422121dd5a94] Running
	I0731 22:44:08.830830 1194386 system_pods.go:61] "kube-apiserver-ha-150891" [4b8aded2-d6a3-4493-ae6e-a345a4c1c872] Running
	I0731 22:44:08.830833 1194386 system_pods.go:61] "kube-apiserver-ha-150891-m02" [667b2e17-ae07-44a9-91ba-486fbacc93ae] Running
	I0731 22:44:08.830836 1194386 system_pods.go:61] "kube-apiserver-ha-150891-m03" [4dc100af-e2cd-4af9-a377-8486ba372988] Running
	I0731 22:44:08.830840 1194386 system_pods.go:61] "kube-controller-manager-ha-150891" [d3e86e76-fbc2-4732-acfc-8462570c27e4] Running
	I0731 22:44:08.830843 1194386 system_pods.go:61] "kube-controller-manager-ha-150891-m02" [952d0923-4ad6-4411-ae52-5bdfc69af65c] Running
	I0731 22:44:08.830846 1194386 system_pods.go:61] "kube-controller-manager-ha-150891-m03" [f38150d3-c750-45fa-ba87-cd66a1d1bf4d] Running
	I0731 22:44:08.830849 1194386 system_pods.go:61] "kube-proxy-9xcss" [287c0a26-1f93-4579-a5db-29b604571422] Running
	I0731 22:44:08.830853 1194386 system_pods.go:61] "kube-proxy-df4cg" [f225450d-1ebe-4a97-af4d-73edfb092291] Running
	I0731 22:44:08.830855 1194386 system_pods.go:61] "kube-proxy-nmkp9" [9253676c-a473-471b-b82e-c5e7fce39774] Running
	I0731 22:44:08.830859 1194386 system_pods.go:61] "kube-scheduler-ha-150891" [bc944154-4cb3-402d-9623-987c3acecd4c] Running
	I0731 22:44:08.830865 1194386 system_pods.go:61] "kube-scheduler-ha-150891-m02" [5e2a6e0a-df70-4e80-8f94-4a6ad47dffd9] Running
	I0731 22:44:08.830868 1194386 system_pods.go:61] "kube-scheduler-ha-150891-m03" [3c5e191f-b66b-4d95-bcdf-cf765eec91f8] Running
	I0731 22:44:08.830871 1194386 system_pods.go:61] "kube-vip-ha-150891" [1b703a99-faf3-4c2d-a871-0fb6bce0b917] Running
	I0731 22:44:08.830874 1194386 system_pods.go:61] "kube-vip-ha-150891-m02" [dc66b927-6e80-477f-9825-8385a3df1a03] Running
	I0731 22:44:08.830877 1194386 system_pods.go:61] "kube-vip-ha-150891-m03" [14435fd1-a3ab-4ca7-a5fe-3ed449a44aa2] Running
	I0731 22:44:08.830880 1194386 system_pods.go:61] "storage-provisioner" [c482636f-76e6-4ea7-9a14-3e9d6a7a4308] Running
	I0731 22:44:08.830887 1194386 system_pods.go:74] duration metric: took 186.991142ms to wait for pod list to return data ...
	I0731 22:44:08.830898 1194386 default_sa.go:34] waiting for default service account to be created ...
	I0731 22:44:09.016334 1194386 request.go:629] Waited for 185.355154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:44:09.016408 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/default/serviceaccounts
	I0731 22:44:09.016415 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:09.016425 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:09.016429 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:09.020097 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:09.020256 1194386 default_sa.go:45] found service account: "default"
	I0731 22:44:09.020275 1194386 default_sa.go:55] duration metric: took 189.367438ms for default service account to be created ...
	I0731 22:44:09.020288 1194386 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 22:44:09.215697 1194386 request.go:629] Waited for 195.304297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:44:09.215777 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/namespaces/kube-system/pods
	I0731 22:44:09.215784 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:09.215795 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:09.215803 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:09.221974 1194386 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0731 22:44:09.228258 1194386 system_pods.go:86] 24 kube-system pods found
	I0731 22:44:09.228293 1194386 system_pods.go:89] "coredns-7db6d8ff4d-4928n" [258080d9-48d4-4214-a8c2-ccdd229a3a4f] Running
	I0731 22:44:09.228299 1194386 system_pods.go:89] "coredns-7db6d8ff4d-rkd4j" [b40942b0-bff9-4a49-88a3-d188d5b7dcbe] Running
	I0731 22:44:09.228306 1194386 system_pods.go:89] "etcd-ha-150891" [3f5f2e82-256b-406e-b58b-51255d338219] Running
	I0731 22:44:09.228311 1194386 system_pods.go:89] "etcd-ha-150891-m02" [d20ff7ae-a18e-476a-9f38-bf9d2eea9e32] Running
	I0731 22:44:09.228315 1194386 system_pods.go:89] "etcd-ha-150891-m03" [d320cf0e-70df-42ce-8a71-b103ab62c498] Running
	I0731 22:44:09.228319 1194386 system_pods.go:89] "kindnet-4qn8c" [4143fb96-5f2a-4107-807d-29ffbf5a95b8] Running
	I0731 22:44:09.228322 1194386 system_pods.go:89] "kindnet-8bkwq" [9d1ea907-d2a6-44ae-8a18-86686b21c2e6] Running
	I0731 22:44:09.228327 1194386 system_pods.go:89] "kindnet-bz2j7" [160def8b-f6ae-4664-8489-422121dd5a94] Running
	I0731 22:44:09.228331 1194386 system_pods.go:89] "kube-apiserver-ha-150891" [4b8aded2-d6a3-4493-ae6e-a345a4c1c872] Running
	I0731 22:44:09.228335 1194386 system_pods.go:89] "kube-apiserver-ha-150891-m02" [667b2e17-ae07-44a9-91ba-486fbacc93ae] Running
	I0731 22:44:09.228339 1194386 system_pods.go:89] "kube-apiserver-ha-150891-m03" [4dc100af-e2cd-4af9-a377-8486ba372988] Running
	I0731 22:44:09.228344 1194386 system_pods.go:89] "kube-controller-manager-ha-150891" [d3e86e76-fbc2-4732-acfc-8462570c27e4] Running
	I0731 22:44:09.228349 1194386 system_pods.go:89] "kube-controller-manager-ha-150891-m02" [952d0923-4ad6-4411-ae52-5bdfc69af65c] Running
	I0731 22:44:09.228353 1194386 system_pods.go:89] "kube-controller-manager-ha-150891-m03" [f38150d3-c750-45fa-ba87-cd66a1d1bf4d] Running
	I0731 22:44:09.228359 1194386 system_pods.go:89] "kube-proxy-9xcss" [287c0a26-1f93-4579-a5db-29b604571422] Running
	I0731 22:44:09.228364 1194386 system_pods.go:89] "kube-proxy-df4cg" [f225450d-1ebe-4a97-af4d-73edfb092291] Running
	I0731 22:44:09.228367 1194386 system_pods.go:89] "kube-proxy-nmkp9" [9253676c-a473-471b-b82e-c5e7fce39774] Running
	I0731 22:44:09.228371 1194386 system_pods.go:89] "kube-scheduler-ha-150891" [bc944154-4cb3-402d-9623-987c3acecd4c] Running
	I0731 22:44:09.228375 1194386 system_pods.go:89] "kube-scheduler-ha-150891-m02" [5e2a6e0a-df70-4e80-8f94-4a6ad47dffd9] Running
	I0731 22:44:09.228379 1194386 system_pods.go:89] "kube-scheduler-ha-150891-m03" [3c5e191f-b66b-4d95-bcdf-cf765eec91f8] Running
	I0731 22:44:09.228386 1194386 system_pods.go:89] "kube-vip-ha-150891" [1b703a99-faf3-4c2d-a871-0fb6bce0b917] Running
	I0731 22:44:09.228390 1194386 system_pods.go:89] "kube-vip-ha-150891-m02" [dc66b927-6e80-477f-9825-8385a3df1a03] Running
	I0731 22:44:09.228396 1194386 system_pods.go:89] "kube-vip-ha-150891-m03" [14435fd1-a3ab-4ca7-a5fe-3ed449a44aa2] Running
	I0731 22:44:09.228400 1194386 system_pods.go:89] "storage-provisioner" [c482636f-76e6-4ea7-9a14-3e9d6a7a4308] Running
	I0731 22:44:09.228415 1194386 system_pods.go:126] duration metric: took 208.121505ms to wait for k8s-apps to be running ...
	I0731 22:44:09.228424 1194386 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 22:44:09.228489 1194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:44:09.244628 1194386 system_svc.go:56] duration metric: took 16.191245ms WaitForService to wait for kubelet
	I0731 22:44:09.244664 1194386 kubeadm.go:582] duration metric: took 23.06783414s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:44:09.244691 1194386 node_conditions.go:102] verifying NodePressure condition ...
	I0731 22:44:09.416209 1194386 request.go:629] Waited for 171.4086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.105:8443/api/v1/nodes
	I0731 22:44:09.416274 1194386 round_trippers.go:463] GET https://192.168.39.105:8443/api/v1/nodes
	I0731 22:44:09.416279 1194386 round_trippers.go:469] Request Headers:
	I0731 22:44:09.416288 1194386 round_trippers.go:473]     Accept: application/json, */*
	I0731 22:44:09.416292 1194386 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0731 22:44:09.419797 1194386 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0731 22:44:09.421056 1194386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:44:09.421082 1194386 node_conditions.go:123] node cpu capacity is 2
	I0731 22:44:09.421096 1194386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:44:09.421101 1194386 node_conditions.go:123] node cpu capacity is 2
	I0731 22:44:09.421109 1194386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 22:44:09.421115 1194386 node_conditions.go:123] node cpu capacity is 2
	I0731 22:44:09.421121 1194386 node_conditions.go:105] duration metric: took 176.424174ms to run NodePressure ...
	I0731 22:44:09.421141 1194386 start.go:241] waiting for startup goroutines ...
	I0731 22:44:09.421167 1194386 start.go:255] writing updated cluster config ...
	I0731 22:44:09.421576 1194386 ssh_runner.go:195] Run: rm -f paused
	I0731 22:44:09.476792 1194386 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 22:44:09.478929 1194386 out.go:177] * Done! kubectl is now configured to use "ha-150891" cluster and "default" namespace by default
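
	For context on the pod_ready.go and round_trippers.go lines above: the wait loop repeatedly GETs each system pod and its node from the API server and treats the pod as ready once its PodReady condition reports True; the recurring "Waited for ~195ms due to client-side throttling, not priority and fairness" messages come from client-go's local request rate limiter, as the log itself states. The following is a minimal, illustrative sketch of that kind of check using client-go. It is not minikube's actual implementation; the kubeconfig path, namespace, and pod name are placeholders taken from the log for readability.

	// readiness_sketch.go — illustrative only, assuming client-go is available.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's PodReady condition is True,
	// which is what the `has status "Ready":"True"` log lines correspond to.
	func podIsReady(pod *corev1.Pod) bool {
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical kubeconfig path; minikube uses its own profile's config.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll until the pod is Ready or the 6m0s budget from the log expires.
		// client-go's default rate limiter is what produces the throttling waits.
		err = wait.PollUntilContextTimeout(context.Background(), 200*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-9xcss", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling through transient errors
				}
				return podIsReady(pod), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	The same pattern, pointed at /healthz and /version instead of a pod, accounts for the apiserver health and control-plane-version checks logged just after the pod waits.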
	
	
	==> CRI-O <==
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.333458078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466120333432948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dda69e38-6c42-4a78-923a-f00621fad96b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.334069526Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdfa0468-2097-40c1-86e6-97bbe94f5477 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.334188308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdfa0468-2097-40c1-86e6-97bbe94f5477 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.334445284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722465854210187335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3efb8efde2a05c2c5ee11cb57e2715c8dbdcdbf679b9c4fe830a41da4707f26,PodSandboxId:c95c974d43c02f935f154ef6b981091f6c662790b401d88a6266673b24dc26cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722465712275947041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712295591607,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712235967440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bf
f9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722465700317979989,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172246569
6992296263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab90b2c667e4a162bc2808fd67610192ef721b38e5015a42dd1d8f9d180fc85,PodSandboxId:07b91077c5b52a52e9ed9f44742cd045be3e49a487c8c229488919f93ef85c58,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224656794
18778474,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74ae3dd0dd4606b1cdbc54e70c36a55,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86,PodSandboxId:39fb7cbb2c19921148ad6039669836e2344ee2af8050baf22644eae23cf7d866,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722465676798796740,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5,PodSandboxId:b43fbf7a4a5485d33256c1b3c49fb7b7599f768dd4f6770d51c3ce7e9011d3a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722465676805630781,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722465676786614631,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722465676790499396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bdfa0468-2097-40c1-86e6-97bbe94f5477 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.372173459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=355ba092-05f5-4148-ac2b-897c8f00a880 name=/runtime.v1.RuntimeService/Version
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.372262915Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=355ba092-05f5-4148-ac2b-897c8f00a880 name=/runtime.v1.RuntimeService/Version
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.373861381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c220ea95-1bd0-41bf-abcb-4b535dfbdbe9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.374389604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466120374358746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c220ea95-1bd0-41bf-abcb-4b535dfbdbe9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.374952057Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b737cd14-3b80-4366-9f3d-583f88429b92 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.375023042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b737cd14-3b80-4366-9f3d-583f88429b92 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.375254226Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722465854210187335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3efb8efde2a05c2c5ee11cb57e2715c8dbdcdbf679b9c4fe830a41da4707f26,PodSandboxId:c95c974d43c02f935f154ef6b981091f6c662790b401d88a6266673b24dc26cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722465712275947041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712295591607,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712235967440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bf
f9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722465700317979989,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172246569
6992296263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab90b2c667e4a162bc2808fd67610192ef721b38e5015a42dd1d8f9d180fc85,PodSandboxId:07b91077c5b52a52e9ed9f44742cd045be3e49a487c8c229488919f93ef85c58,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224656794
18778474,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74ae3dd0dd4606b1cdbc54e70c36a55,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86,PodSandboxId:39fb7cbb2c19921148ad6039669836e2344ee2af8050baf22644eae23cf7d866,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722465676798796740,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5,PodSandboxId:b43fbf7a4a5485d33256c1b3c49fb7b7599f768dd4f6770d51c3ce7e9011d3a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722465676805630781,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722465676786614631,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722465676790499396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b737cd14-3b80-4366-9f3d-583f88429b92 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.415204367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3da6cbd6-ed9f-4a27-8bfe-38fd44e30b3f name=/runtime.v1.RuntimeService/Version
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.415292560Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3da6cbd6-ed9f-4a27-8bfe-38fd44e30b3f name=/runtime.v1.RuntimeService/Version
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.416620243Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0d40c69a-c86c-40b2-aa21-c0cb6af251ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.417202954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466120417177558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0d40c69a-c86c-40b2-aa21-c0cb6af251ab name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.417809074Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0de7e8bf-7237-4a17-a234-01eead2c5470 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.417876833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0de7e8bf-7237-4a17-a234-01eead2c5470 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.418145141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722465854210187335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3efb8efde2a05c2c5ee11cb57e2715c8dbdcdbf679b9c4fe830a41da4707f26,PodSandboxId:c95c974d43c02f935f154ef6b981091f6c662790b401d88a6266673b24dc26cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722465712275947041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712295591607,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712235967440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bf
f9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722465700317979989,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172246569
6992296263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab90b2c667e4a162bc2808fd67610192ef721b38e5015a42dd1d8f9d180fc85,PodSandboxId:07b91077c5b52a52e9ed9f44742cd045be3e49a487c8c229488919f93ef85c58,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224656794
18778474,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74ae3dd0dd4606b1cdbc54e70c36a55,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86,PodSandboxId:39fb7cbb2c19921148ad6039669836e2344ee2af8050baf22644eae23cf7d866,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722465676798796740,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5,PodSandboxId:b43fbf7a4a5485d33256c1b3c49fb7b7599f768dd4f6770d51c3ce7e9011d3a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722465676805630781,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722465676786614631,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722465676790499396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0de7e8bf-7237-4a17-a234-01eead2c5470 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.460161502Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d05377a-9ec8-41f8-93b0-23172a91c3cd name=/runtime.v1.RuntimeService/Version
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.460236118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d05377a-9ec8-41f8-93b0-23172a91c3cd name=/runtime.v1.RuntimeService/Version
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.461243399Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff7d6673-d4a2-42b5-ad0f-c955502b1b55 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.461680283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466120461657606,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff7d6673-d4a2-42b5-ad0f-c955502b1b55 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.462329318Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18ef8bc5-c0d8-4afb-9ae4-42d5627f5850 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.462393414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18ef8bc5-c0d8-4afb-9ae4-42d5627f5850 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:48:40 ha-150891 crio[676]: time="2024-07-31 22:48:40.462616636Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722465854210187335,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3efb8efde2a05c2c5ee11cb57e2715c8dbdcdbf679b9c4fe830a41da4707f26,PodSandboxId:c95c974d43c02f935f154ef6b981091f6c662790b401d88a6266673b24dc26cd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722465712275947041,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712295591607,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722465712235967440,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bf
f9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722465700317979989,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172246569
6992296263,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ab90b2c667e4a162bc2808fd67610192ef721b38e5015a42dd1d8f9d180fc85,PodSandboxId:07b91077c5b52a52e9ed9f44742cd045be3e49a487c8c229488919f93ef85c58,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17224656794
18778474,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f74ae3dd0dd4606b1cdbc54e70c36a55,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86,PodSandboxId:39fb7cbb2c19921148ad6039669836e2344ee2af8050baf22644eae23cf7d866,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722465676798796740,Labels:map[stri
ng]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5,PodSandboxId:b43fbf7a4a5485d33256c1b3c49fb7b7599f768dd4f6770d51c3ce7e9011d3a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722465676805630781,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722465676786614631,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722465676790499396,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuberne
tes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18ef8bc5-c0d8-4afb-9ae4-42d5627f5850 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	17bbba80074e2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   23ff00497365e       busybox-fc5497c4f-98526
	6c2d6faeccb11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   60acb98d73509       coredns-7db6d8ff4d-4928n
	e3efb8efde2a0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   c95c974d43c02       storage-provisioner
	569d471778fea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   911e886f5312d       coredns-7db6d8ff4d-rkd4j
	6800ea54157a1       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   de805f7545942       kindnet-4qn8c
	45f49431a7774       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   af4274f85760c       kube-proxy-9xcss
	8ab90b2c667e4       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   07b91077c5b52       kube-vip-ha-150891
	8ae0e6eb6658d       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   b43fbf7a4a548       kube-apiserver-ha-150891
	92f65fc372a62       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   39fb7cbb2c199       kube-controller-manager-ha-150891
	31a5692b683c3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   148244b8abdde       etcd-ha-150891
	c5a522e53c2bc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   015145f976eb6       kube-scheduler-ha-150891
	
	
	==> coredns [569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2] <==
	[INFO] 10.244.2.2:39965 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000115385s
	[INFO] 10.244.0.4:53269 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00369728s
	[INFO] 10.244.0.4:36211 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115481s
	[INFO] 10.244.0.4:59572 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000163023s
	[INFO] 10.244.0.4:35175 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000158586s
	[INFO] 10.244.1.2:33021 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180449s
	[INFO] 10.244.1.2:54691 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000080124s
	[INFO] 10.244.1.2:59380 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104324s
	[INFO] 10.244.2.2:46771 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088924s
	[INFO] 10.244.2.2:51063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001242769s
	[INFO] 10.244.2.2:49935 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074586s
	[INFO] 10.244.0.4:56290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010407s
	[INFO] 10.244.0.4:57803 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109451s
	[INFO] 10.244.1.2:53651 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133423s
	[INFO] 10.244.1.2:54989 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149762s
	[INFO] 10.244.1.2:55181 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079999s
	[INFO] 10.244.1.2:45949 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096277s
	[INFO] 10.244.2.2:38998 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160565s
	[INFO] 10.244.2.2:55687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080958s
	[INFO] 10.244.0.4:36222 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152278s
	[INFO] 10.244.0.4:55182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115569s
	[INFO] 10.244.0.4:40749 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099022s
	[INFO] 10.244.1.2:42636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134944s
	[INFO] 10.244.1.2:45102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091957s
	[INFO] 10.244.1.2:39878 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081213s
	
	
	==> coredns [6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811] <==
	[INFO] 10.244.2.2:44462 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001610027s
	[INFO] 10.244.0.4:37392 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001106s
	[INFO] 10.244.0.4:45747 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144796s
	[INFO] 10.244.0.4:48856 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004798514s
	[INFO] 10.244.0.4:44718 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011559s
	[INFO] 10.244.1.2:39166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153589s
	[INFO] 10.244.1.2:53738 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00171146s
	[INFO] 10.244.1.2:53169 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192547s
	[INFO] 10.244.1.2:46534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001207677s
	[INFO] 10.244.1.2:40987 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092132s
	[INFO] 10.244.2.2:51004 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179521s
	[INFO] 10.244.2.2:44618 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001670196s
	[INFO] 10.244.2.2:34831 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094811s
	[INFO] 10.244.2.2:49392 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000285273s
	[INFO] 10.244.2.2:44694 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111378s
	[INFO] 10.244.0.4:58491 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160933s
	[INFO] 10.244.0.4:44490 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217734s
	[INFO] 10.244.2.2:53960 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106212s
	[INFO] 10.244.2.2:47661 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161869s
	[INFO] 10.244.0.4:43273 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101944s
	[INFO] 10.244.1.2:54182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187102s
	[INFO] 10.244.2.2:60067 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151741s
	[INFO] 10.244.2.2:49034 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160035s
	[INFO] 10.244.2.2:49392 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096218s
	[INFO] 10.244.2.2:59220 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129048s
	
	
	==> describe nodes <==
	Name:               ha-150891
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T22_41_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:41:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:48:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:44:26 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:44:26 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:44:26 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:44:26 +0000   Wed, 31 Jul 2024 22:41:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-150891
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a8ca2005fa042d7a84b5199ab2c7a15
	  System UUID:                6a8ca200-5fa0-42d7-a84b-5199ab2c7a15
	  Boot ID:                    2ffe06f6-f7c0-4945-b70b-2276f3221b95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-98526              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-7db6d8ff4d-4928n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m4s
	  kube-system                 coredns-7db6d8ff4d-rkd4j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m4s
	  kube-system                 etcd-ha-150891                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m17s
	  kube-system                 kindnet-4qn8c                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m4s
	  kube-system                 kube-apiserver-ha-150891             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 kube-controller-manager-ha-150891    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-proxy-9xcss                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-scheduler-ha-150891             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-vip-ha-150891                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m20s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m3s   kube-proxy       
	  Normal  Starting                 7m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m17s  kubelet          Node ha-150891 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m17s  kubelet          Node ha-150891 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m17s  kubelet          Node ha-150891 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m4s   node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal  NodeReady                6m49s  kubelet          Node ha-150891 status is now: NodeReady
	  Normal  RegisteredNode           5m53s  node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal  RegisteredNode           4m41s  node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	
	
	Name:               ha-150891-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_42_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:42:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:45:23 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 22:44:32 +0000   Wed, 31 Jul 2024 22:46:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 22:44:32 +0000   Wed, 31 Jul 2024 22:46:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 22:44:32 +0000   Wed, 31 Jul 2024 22:46:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 22:44:32 +0000   Wed, 31 Jul 2024 22:46:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-150891-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1608b7369bb468b8c8c5013f81b09bb
	  System UUID:                c1608b73-69bb-468b-8c8c-5013f81b09bb
	  Boot ID:                    8dafe8a2-11cb-4840-b6a7-75e519b66bfd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cwsjc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-150891-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-bz2j7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-150891-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-ha-150891-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-proxy-nmkp9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-150891-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-vip-ha-150891-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m7s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  6m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m10s (x8 over 6m11s)  kubelet          Node ha-150891-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m10s (x8 over 6m11s)  kubelet          Node ha-150891-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m10s (x7 over 6m11s)  kubelet          Node ha-150891-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m9s                   node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           5m53s                  node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           4m41s                  node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  NodeNotReady             2m36s                  node-controller  Node ha-150891-m02 status is now: NodeNotReady
	
	
	Name:               ha-150891-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_43_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:43:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:48:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:44:43 +0000   Wed, 31 Jul 2024 22:43:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:44:43 +0000   Wed, 31 Jul 2024 22:43:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:44:43 +0000   Wed, 31 Jul 2024 22:43:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:44:43 +0000   Wed, 31 Jul 2024 22:44:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-150891-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55f48101720847269fc5703e686b1c56
	  System UUID:                55f48101-7208-4726-9fc5-703e686b1c56
	  Boot ID:                    81b14277-4c6c-4d69-82a6-40f099138a1b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gzb99                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 etcd-ha-150891-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m56s
	  kube-system                 kindnet-8bkwq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m58s
	  kube-system                 kube-apiserver-ha-150891-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-ha-150891-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-proxy-df4cg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-ha-150891-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-vip-ha-150891-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m58s (x8 over 4m58s)  kubelet          Node ha-150891-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s (x8 over 4m58s)  kubelet          Node ha-150891-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s (x7 over 4m58s)  kubelet          Node ha-150891-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m54s                  node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	  Normal  RegisteredNode           4m41s                  node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	
	
	Name:               ha-150891-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_44_46_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:44:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:48:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:45:16 +0000   Wed, 31 Jul 2024 22:44:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:45:16 +0000   Wed, 31 Jul 2024 22:44:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:45:16 +0000   Wed, 31 Jul 2024 22:44:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:45:16 +0000   Wed, 31 Jul 2024 22:45:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    ha-150891-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bdcf2d763364b5cbf54f471f1e49c03
	  System UUID:                7bdcf2d7-6336-4b5c-bf54-f471f1e49c03
	  Boot ID:                    c717e92b-7c1b-482a-898b-ac9f84a2f188
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4ghcd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-proxy-l8srs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m55s (x2 over 3m55s)  kubelet          Node ha-150891-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s (x2 over 3m55s)  kubelet          Node ha-150891-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s (x2 over 3m55s)  kubelet          Node ha-150891-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal  NodeReady                3m35s                  kubelet          Node ha-150891-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul31 22:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.048173] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036734] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.718655] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.876549] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.547584] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 22:41] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.059402] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055698] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.187489] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.128918] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.269933] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.169130] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +3.879571] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.061597] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.693408] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +0.081387] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.056574] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.292402] kauditd_printk_skb: 38 callbacks suppressed
	[Jul31 22:42] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8] <==
	{"level":"warn","ts":"2024-07-31T22:48:40.429728Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.529148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.602452Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"90e478e20277b34c","rtt":"10.429516ms","error":"dial tcp 192.168.39.224:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-31T22:48:40.602526Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"90e478e20277b34c","rtt":"927.449µs","error":"dial tcp 192.168.39.224:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-07-31T22:48:40.72867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.737789Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.743914Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.749986Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.756297Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.759792Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.767348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.773495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.779334Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.783496Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.78696Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.795018Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.800926Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.807224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.812229Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.816412Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.822506Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.83202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.83294Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.840546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-31T22:48:40.898011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"38dbae10e7efb596","from":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 22:48:40 up 7 min,  0 users,  load average: 0.21, 0.28, 0.15
	Linux ha-150891 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f] <==
	I0731 22:48:01.247992       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:48:11.242636       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:48:11.242677       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:48:11.242879       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:48:11.242899       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:48:11.242949       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:48:11.242966       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:48:11.243016       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:48:11.243034       1 main.go:299] handling current node
	I0731 22:48:21.243054       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:48:21.243446       1 main.go:299] handling current node
	I0731 22:48:21.243601       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:48:21.243788       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:48:21.244093       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:48:21.244182       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:48:21.244360       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:48:21.244404       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:48:31.241874       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:48:31.241978       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:48:31.242116       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:48:31.242137       1 main.go:299] handling current node
	I0731 22:48:31.242159       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:48:31.242175       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:48:31.242232       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:48:31.242249       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5] <==
	W0731 22:41:21.674980       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.105]
	I0731 22:41:21.676103       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 22:41:21.681570       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 22:41:21.776540       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 22:41:23.058039       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 22:41:23.086959       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 22:41:23.107753       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 22:41:36.281096       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 22:41:36.291653       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0731 22:44:15.245172       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35480: use of closed network connection
	E0731 22:44:15.440060       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35500: use of closed network connection
	E0731 22:44:15.633671       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35518: use of closed network connection
	E0731 22:44:15.834481       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35548: use of closed network connection
	E0731 22:44:16.021022       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35572: use of closed network connection
	E0731 22:44:16.212498       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35596: use of closed network connection
	E0731 22:44:16.390433       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35606: use of closed network connection
	E0731 22:44:16.576806       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35620: use of closed network connection
	E0731 22:44:16.767322       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35632: use of closed network connection
	E0731 22:44:17.064161       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35668: use of closed network connection
	E0731 22:44:17.243491       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35696: use of closed network connection
	E0731 22:44:17.426959       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35708: use of closed network connection
	E0731 22:44:17.608016       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35726: use of closed network connection
	E0731 22:44:17.782924       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35746: use of closed network connection
	E0731 22:44:17.965215       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:35762: use of closed network connection
	W0731 22:45:41.685846       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.105 192.168.39.241]
	
	
	==> kube-controller-manager [92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86] <==
	I0731 22:43:46.262884       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-150891-m03"
	I0731 22:44:10.374961       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.920507ms"
	I0731 22:44:10.423976       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.943673ms"
	I0731 22:44:10.598390       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="174.303069ms"
	I0731 22:44:10.688164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.667837ms"
	I0731 22:44:10.712141       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.907255ms"
	I0731 22:44:10.712544       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.798µs"
	I0731 22:44:10.758206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.471521ms"
	I0731 22:44:10.758423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.491µs"
	I0731 22:44:12.205295       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.069µs"
	I0731 22:44:13.011052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="81.806µs"
	I0731 22:44:13.479543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.911469ms"
	I0731 22:44:13.480355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.095µs"
	I0731 22:44:13.541457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.954691ms"
	I0731 22:44:13.542515       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.112µs"
	I0731 22:44:14.698025       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.750172ms"
	I0731 22:44:14.698160       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="77.42µs"
	E0731 22:44:45.863658       1 certificate_controller.go:146] Sync csr-bsh6r failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-bsh6r": the object has been modified; please apply your changes to the latest version and try again
	I0731 22:44:46.107072       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-150891-m04\" does not exist"
	I0731 22:44:46.171999       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-150891-m04" podCIDRs=["10.244.3.0/24"]
	I0731 22:44:46.272280       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-150891-m04"
	I0731 22:45:05.278325       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-150891-m04"
	I0731 22:46:04.790411       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-150891-m04"
	I0731 22:46:04.941965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.384307ms"
	I0731 22:46:04.942097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.951µs"
	
	
	==> kube-proxy [45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526] <==
	I0731 22:41:37.271747       1 server_linux.go:69] "Using iptables proxy"
	I0731 22:41:37.292067       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.105"]
	I0731 22:41:37.329876       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 22:41:37.329936       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 22:41:37.329954       1 server_linux.go:165] "Using iptables Proxier"
	I0731 22:41:37.333189       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 22:41:37.333799       1 server.go:872] "Version info" version="v1.30.3"
	I0731 22:41:37.333827       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:41:37.335327       1 config.go:192] "Starting service config controller"
	I0731 22:41:37.335755       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 22:41:37.335806       1 config.go:101] "Starting endpoint slice config controller"
	I0731 22:41:37.335822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 22:41:37.336508       1 config.go:319] "Starting node config controller"
	I0731 22:41:37.336539       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 22:41:37.436754       1 shared_informer.go:320] Caches are synced for node config
	I0731 22:41:37.436810       1 shared_informer.go:320] Caches are synced for service config
	I0731 22:41:37.436865       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78] <==
	W0731 22:41:21.061752       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:41:21.061892       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 22:41:21.079018       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 22:41:21.079065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 22:41:21.088068       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 22:41:21.088125       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 22:41:21.163742       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 22:41:21.163825       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 22:41:21.239473       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 22:41:21.239521       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 22:41:21.346622       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 22:41:21.346664       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 22:41:24.138667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 22:43:42.312042       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-df4cg\": pod kube-proxy-df4cg is already assigned to node \"ha-150891-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-df4cg" node="ha-150891-m03"
	E0731 22:43:42.312149       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-df4cg\": pod kube-proxy-df4cg is already assigned to node \"ha-150891-m03\"" pod="kube-system/kube-proxy-df4cg"
	E0731 22:43:42.318233       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8bkwq\": pod kindnet-8bkwq is already assigned to node \"ha-150891-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-8bkwq" node="ha-150891-m03"
	E0731 22:43:42.318296       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9d1ea907-d2a6-44ae-8a18-86686b21c2e6(kube-system/kindnet-8bkwq) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-8bkwq"
	E0731 22:43:42.318311       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8bkwq\": pod kindnet-8bkwq is already assigned to node \"ha-150891-m03\"" pod="kube-system/kindnet-8bkwq"
	I0731 22:43:42.318329       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8bkwq" node="ha-150891-m03"
	E0731 22:44:46.183027       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-djfjt\": pod kindnet-djfjt is already assigned to node \"ha-150891-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-djfjt" node="ha-150891-m04"
	E0731 22:44:46.183131       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-djfjt\": pod kindnet-djfjt is already assigned to node \"ha-150891-m04\"" pod="kube-system/kindnet-djfjt"
	E0731 22:44:46.227608       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4ghcd\": pod kindnet-4ghcd is already assigned to node \"ha-150891-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4ghcd" node="ha-150891-m04"
	E0731 22:44:46.227760       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4ghcd\": pod kindnet-4ghcd is already assigned to node \"ha-150891-m04\"" pod="kube-system/kindnet-4ghcd"
	E0731 22:44:46.228158       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-5wxdl\": pod kube-proxy-5wxdl is already assigned to node \"ha-150891-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-5wxdl" node="ha-150891-m04"
	E0731 22:44:46.228265       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-5wxdl\": pod kube-proxy-5wxdl is already assigned to node \"ha-150891-m04\"" pod="kube-system/kube-proxy-5wxdl"
	
	
	==> kubelet <==
	Jul 31 22:44:23 ha-150891 kubelet[1359]: E0731 22:44:23.027458    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:44:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:44:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:44:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:44:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:45:23 ha-150891 kubelet[1359]: E0731 22:45:23.031118    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:45:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:45:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:45:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:45:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:46:23 ha-150891 kubelet[1359]: E0731 22:46:23.027472    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:46:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:46:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:46:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:46:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:47:23 ha-150891 kubelet[1359]: E0731 22:47:23.026828    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:47:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:47:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:47:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:47:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:48:23 ha-150891 kubelet[1359]: E0731 22:48:23.027826    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:48:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:48:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:48:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:48:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-150891 -n ha-150891
helpers_test.go:261: (dbg) Run:  kubectl --context ha-150891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (376.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-150891 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-150891 -v=7 --alsologtostderr
E0731 22:49:53.720709 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:50:21.406752 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-150891 -v=7 --alsologtostderr: exit status 82 (2m1.854829164s)

                                                
                                                
-- stdout --
	* Stopping node "ha-150891-m04"  ...
	* Stopping node "ha-150891-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:48:42.357082 1200116 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:48:42.357245 1200116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:42.357254 1200116 out.go:304] Setting ErrFile to fd 2...
	I0731 22:48:42.357259 1200116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:48:42.357445 1200116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:48:42.358138 1200116 out.go:298] Setting JSON to false
	I0731 22:48:42.358262 1200116 mustload.go:65] Loading cluster: ha-150891
	I0731 22:48:42.359243 1200116 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:48:42.359380 1200116 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:48:42.359610 1200116 mustload.go:65] Loading cluster: ha-150891
	I0731 22:48:42.359769 1200116 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:48:42.359823 1200116 stop.go:39] StopHost: ha-150891-m04
	I0731 22:48:42.360262 1200116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:42.360326 1200116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:42.375920 1200116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39543
	I0731 22:48:42.376460 1200116 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:42.377091 1200116 main.go:141] libmachine: Using API Version  1
	I0731 22:48:42.377118 1200116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:42.377451 1200116 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:42.379846 1200116 out.go:177] * Stopping node "ha-150891-m04"  ...
	I0731 22:48:42.381349 1200116 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 22:48:42.381398 1200116 main.go:141] libmachine: (ha-150891-m04) Calling .DriverName
	I0731 22:48:42.381732 1200116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 22:48:42.381760 1200116 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHHostname
	I0731 22:48:42.384698 1200116 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:42.385108 1200116 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:44:32 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:48:42.385136 1200116 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:48:42.385321 1200116 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHPort
	I0731 22:48:42.385502 1200116 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHKeyPath
	I0731 22:48:42.385662 1200116 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHUsername
	I0731 22:48:42.385851 1200116 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m04/id_rsa Username:docker}
	I0731 22:48:42.470716 1200116 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 22:48:42.524682 1200116 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 22:48:42.577902 1200116 main.go:141] libmachine: Stopping "ha-150891-m04"...
	I0731 22:48:42.577941 1200116 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:48:42.579457 1200116 main.go:141] libmachine: (ha-150891-m04) Calling .Stop
	I0731 22:48:42.583892 1200116 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 0/120
	I0731 22:48:43.718194 1200116 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:48:43.719780 1200116 main.go:141] libmachine: Machine "ha-150891-m04" was stopped.
	I0731 22:48:43.719810 1200116 stop.go:75] duration metric: took 1.338468647s to stop
	I0731 22:48:43.719836 1200116 stop.go:39] StopHost: ha-150891-m03
	I0731 22:48:43.720202 1200116 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:48:43.720260 1200116 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:48:43.736068 1200116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42793
	I0731 22:48:43.736703 1200116 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:48:43.737272 1200116 main.go:141] libmachine: Using API Version  1
	I0731 22:48:43.737296 1200116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:48:43.737623 1200116 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:48:43.740365 1200116 out.go:177] * Stopping node "ha-150891-m03"  ...
	I0731 22:48:43.741590 1200116 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 22:48:43.741625 1200116 main.go:141] libmachine: (ha-150891-m03) Calling .DriverName
	I0731 22:48:43.741943 1200116 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 22:48:43.741970 1200116 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHHostname
	I0731 22:48:43.745348 1200116 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:43.745894 1200116 main.go:141] libmachine: (ha-150891-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f8:ec:6d", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:43:09 +0000 UTC Type:0 Mac:52:54:00:f8:ec:6d Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-150891-m03 Clientid:01:52:54:00:f8:ec:6d}
	I0731 22:48:43.745939 1200116 main.go:141] libmachine: (ha-150891-m03) DBG | domain ha-150891-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:f8:ec:6d in network mk-ha-150891
	I0731 22:48:43.746130 1200116 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHPort
	I0731 22:48:43.746330 1200116 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHKeyPath
	I0731 22:48:43.746490 1200116 main.go:141] libmachine: (ha-150891-m03) Calling .GetSSHUsername
	I0731 22:48:43.746663 1200116 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m03/id_rsa Username:docker}
	I0731 22:48:43.833447 1200116 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 22:48:43.887199 1200116 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 22:48:43.941249 1200116 main.go:141] libmachine: Stopping "ha-150891-m03"...
	I0731 22:48:43.941299 1200116 main.go:141] libmachine: (ha-150891-m03) Calling .GetState
	I0731 22:48:43.942842 1200116 main.go:141] libmachine: (ha-150891-m03) Calling .Stop
	I0731 22:48:43.946640 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 0/120
	I0731 22:48:44.948613 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 1/120
	I0731 22:48:45.950375 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 2/120
	I0731 22:48:46.952884 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 3/120
	I0731 22:48:47.954523 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 4/120
	I0731 22:48:48.956834 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 5/120
	I0731 22:48:49.958811 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 6/120
	I0731 22:48:50.960374 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 7/120
	I0731 22:48:51.962133 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 8/120
	I0731 22:48:52.963856 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 9/120
	I0731 22:48:53.966027 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 10/120
	I0731 22:48:54.967650 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 11/120
	I0731 22:48:55.969128 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 12/120
	I0731 22:48:56.970791 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 13/120
	I0731 22:48:57.972469 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 14/120
	I0731 22:48:58.974591 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 15/120
	I0731 22:48:59.976158 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 16/120
	I0731 22:49:00.977737 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 17/120
	I0731 22:49:01.979593 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 18/120
	I0731 22:49:02.981159 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 19/120
	I0731 22:49:03.983317 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 20/120
	I0731 22:49:04.984876 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 21/120
	I0731 22:49:05.986643 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 22/120
	I0731 22:49:06.988329 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 23/120
	I0731 22:49:07.990150 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 24/120
	I0731 22:49:08.992273 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 25/120
	I0731 22:49:09.994050 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 26/120
	I0731 22:49:10.995482 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 27/120
	I0731 22:49:11.997004 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 28/120
	I0731 22:49:12.998786 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 29/120
	I0731 22:49:14.001081 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 30/120
	I0731 22:49:15.002570 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 31/120
	I0731 22:49:16.004244 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 32/120
	I0731 22:49:17.005750 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 33/120
	I0731 22:49:18.007478 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 34/120
	I0731 22:49:19.009608 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 35/120
	I0731 22:49:20.011164 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 36/120
	I0731 22:49:21.012772 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 37/120
	I0731 22:49:22.014269 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 38/120
	I0731 22:49:23.015891 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 39/120
	I0731 22:49:24.018104 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 40/120
	I0731 22:49:25.019639 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 41/120
	I0731 22:49:26.021225 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 42/120
	I0731 22:49:27.023270 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 43/120
	I0731 22:49:28.024911 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 44/120
	I0731 22:49:29.026834 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 45/120
	I0731 22:49:30.028387 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 46/120
	I0731 22:49:31.030798 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 47/120
	I0731 22:49:32.032451 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 48/120
	I0731 22:49:33.033898 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 49/120
	I0731 22:49:34.035756 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 50/120
	I0731 22:49:35.037394 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 51/120
	I0731 22:49:36.038981 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 52/120
	I0731 22:49:37.040491 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 53/120
	I0731 22:49:38.042928 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 54/120
	I0731 22:49:39.045121 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 55/120
	I0731 22:49:40.046527 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 56/120
	I0731 22:49:41.047948 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 57/120
	I0731 22:49:42.049606 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 58/120
	I0731 22:49:43.051246 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 59/120
	I0731 22:49:44.053747 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 60/120
	I0731 22:49:45.055268 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 61/120
	I0731 22:49:46.056949 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 62/120
	I0731 22:49:47.058288 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 63/120
	I0731 22:49:48.059956 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 64/120
	I0731 22:49:49.061576 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 65/120
	I0731 22:49:50.063308 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 66/120
	I0731 22:49:51.064911 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 67/120
	I0731 22:49:52.066546 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 68/120
	I0731 22:49:53.068657 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 69/120
	I0731 22:49:54.070372 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 70/120
	I0731 22:49:55.071892 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 71/120
	I0731 22:49:56.073340 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 72/120
	I0731 22:49:57.074961 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 73/120
	I0731 22:49:58.076345 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 74/120
	I0731 22:49:59.078167 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 75/120
	I0731 22:50:00.080367 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 76/120
	I0731 22:50:01.082600 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 77/120
	I0731 22:50:02.083985 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 78/120
	I0731 22:50:03.085355 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 79/120
	I0731 22:50:04.087489 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 80/120
	I0731 22:50:05.088959 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 81/120
	I0731 22:50:06.090681 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 82/120
	I0731 22:50:07.092055 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 83/120
	I0731 22:50:08.094393 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 84/120
	I0731 22:50:09.096059 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 85/120
	I0731 22:50:10.097515 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 86/120
	I0731 22:50:11.099038 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 87/120
	I0731 22:50:12.100764 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 88/120
	I0731 22:50:13.102371 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 89/120
	I0731 22:50:14.104555 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 90/120
	I0731 22:50:15.106009 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 91/120
	I0731 22:50:16.107566 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 92/120
	I0731 22:50:17.109024 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 93/120
	I0731 22:50:18.110625 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 94/120
	I0731 22:50:19.112923 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 95/120
	I0731 22:50:20.114318 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 96/120
	I0731 22:50:21.115885 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 97/120
	I0731 22:50:22.117675 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 98/120
	I0731 22:50:23.119411 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 99/120
	I0731 22:50:24.121558 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 100/120
	I0731 22:50:25.123015 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 101/120
	I0731 22:50:26.124675 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 102/120
	I0731 22:50:27.126273 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 103/120
	I0731 22:50:28.127962 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 104/120
	I0731 22:50:29.129537 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 105/120
	I0731 22:50:30.131505 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 106/120
	I0731 22:50:31.133151 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 107/120
	I0731 22:50:32.134869 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 108/120
	I0731 22:50:33.136275 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 109/120
	I0731 22:50:34.138345 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 110/120
	I0731 22:50:35.140183 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 111/120
	I0731 22:50:36.141703 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 112/120
	I0731 22:50:37.143434 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 113/120
	I0731 22:50:38.144958 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 114/120
	I0731 22:50:39.147197 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 115/120
	I0731 22:50:40.149146 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 116/120
	I0731 22:50:41.150861 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 117/120
	I0731 22:50:42.152608 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 118/120
	I0731 22:50:43.154165 1200116 main.go:141] libmachine: (ha-150891-m03) Waiting for machine to stop 119/120
	I0731 22:50:44.155462 1200116 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 22:50:44.155554 1200116 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 22:50:44.158064 1200116 out.go:177] 
	W0731 22:50:44.159764 1200116 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 22:50:44.159801 1200116 out.go:239] * 
	* 
	W0731 22:50:44.164620 1200116 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 22:50:44.165919 1200116 out.go:177] 

                                                
                                                
** /stderr **
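
The "Waiting for machine to stop N/120" lines above come from a simple bounded poll: the stop request is issued once, then the VM state is re-checked roughly once per second, up to 120 times, before the command gives up and surfaces GUEST_STOP_TIMEOUT. A minimal Go sketch of that pattern follows; it is an illustration only, not minikube's implementation, and vmState/stopVM are hypothetical stand-ins for the kvm2 driver calls.

	// Sketch of a bounded stop-and-poll loop (illustration only, not minikube source).
	// vmState and stopVM are hypothetical stand-ins for the kvm2 driver calls.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func vmState() string { return "Running" } // hypothetical: report the current VM state
	func stopVM() error   { return nil }       // hypothetical: request a graceful stop

	// waitForStop requests a stop, then polls once per second up to maxRetries
	// times; it returns an error if the VM never leaves the "Running" state.
	func waitForStop(maxRetries int) error {
		if err := stopVM(); err != nil {
			return err
		}
		for i := 0; i < maxRetries; i++ {
			if vmState() != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(120); err != nil {
			fmt.Println("stop err:", err) // the run above surfaced this as GUEST_STOP_TIMEOUT
		}
	}

In the failing run, all 120 polls still saw the machine "Running", so the stop was abandoned with exit status 82 and the follow-up start (below) had to recover the still-running cluster in place.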
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-150891 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-150891 --wait=true -v=7 --alsologtostderr
E0731 22:54:53.719961 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-150891 --wait=true -v=7 --alsologtostderr: (4m12.107683706s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-150891
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-150891 -n ha-150891
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-150891 logs -n 25: (1.7421825s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m02:/home/docker/cp-test_ha-150891-m03_ha-150891-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m02 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m03_ha-150891-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04:/home/docker/cp-test_ha-150891-m03_ha-150891-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m04 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m03_ha-150891-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp testdata/cp-test.txt                                                | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3873107821/001/cp-test_ha-150891-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891:/home/docker/cp-test_ha-150891-m04_ha-150891.txt                       |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891 sudo cat                                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891.txt                                 |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m02:/home/docker/cp-test_ha-150891-m04_ha-150891-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m02 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03:/home/docker/cp-test_ha-150891-m04_ha-150891-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m03 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-150891 node stop m02 -v=7                                                     | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-150891 node start m02 -v=7                                                    | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-150891 -v=7                                                           | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-150891 -v=7                                                                | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-150891 --wait=true -v=7                                                    | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:50 UTC | 31 Jul 24 22:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-150891                                                                | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:54 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:50:44
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:50:44.217335 1200572 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:50:44.217474 1200572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:50:44.217486 1200572 out.go:304] Setting ErrFile to fd 2...
	I0731 22:50:44.217492 1200572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:50:44.217728 1200572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:50:44.218344 1200572 out.go:298] Setting JSON to false
	I0731 22:50:44.219468 1200572 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":23595,"bootTime":1722442649,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 22:50:44.219551 1200572 start.go:139] virtualization: kvm guest
	I0731 22:50:44.222010 1200572 out.go:177] * [ha-150891] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 22:50:44.223485 1200572 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:50:44.223525 1200572 notify.go:220] Checking for updates...
	I0731 22:50:44.225862 1200572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:50:44.227351 1200572 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:50:44.228772 1200572 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:50:44.230081 1200572 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 22:50:44.231320 1200572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:50:44.232947 1200572 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:50:44.233080 1200572 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:50:44.233554 1200572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:50:44.233618 1200572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:50:44.249691 1200572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0731 22:50:44.250159 1200572 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:50:44.250777 1200572 main.go:141] libmachine: Using API Version  1
	I0731 22:50:44.250797 1200572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:50:44.251178 1200572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:50:44.251395 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:50:44.291027 1200572 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 22:50:44.292159 1200572 start.go:297] selected driver: kvm2
	I0731 22:50:44.292178 1200572 start.go:901] validating driver "kvm2" against &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.120 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:50:44.292361 1200572 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:50:44.292799 1200572 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:50:44.292901 1200572 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 22:50:44.310219 1200572 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 22:50:44.310958 1200572 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:50:44.311021 1200572 cni.go:84] Creating CNI manager for ""
	I0731 22:50:44.311030 1200572 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 22:50:44.311084 1200572 start.go:340] cluster config:
	{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.120 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:50:44.311235 1200572 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:50:44.314540 1200572 out.go:177] * Starting "ha-150891" primary control-plane node in "ha-150891" cluster
	I0731 22:50:44.315710 1200572 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:50:44.315752 1200572 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 22:50:44.315767 1200572 cache.go:56] Caching tarball of preloaded images
	I0731 22:50:44.315873 1200572 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 22:50:44.315887 1200572 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 22:50:44.316034 1200572 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:50:44.316292 1200572 start.go:360] acquireMachinesLock for ha-150891: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:50:44.316346 1200572 start.go:364] duration metric: took 29.338µs to acquireMachinesLock for "ha-150891"
	I0731 22:50:44.316372 1200572 start.go:96] Skipping create...Using existing machine configuration
	I0731 22:50:44.316381 1200572 fix.go:54] fixHost starting: 
	I0731 22:50:44.316666 1200572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:50:44.316708 1200572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:50:44.332415 1200572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37317
	I0731 22:50:44.332899 1200572 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:50:44.333454 1200572 main.go:141] libmachine: Using API Version  1
	I0731 22:50:44.333480 1200572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:50:44.333890 1200572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:50:44.334126 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:50:44.334267 1200572 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:50:44.335933 1200572 fix.go:112] recreateIfNeeded on ha-150891: state=Running err=<nil>
	W0731 22:50:44.335969 1200572 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 22:50:44.337621 1200572 out.go:177] * Updating the running kvm2 "ha-150891" VM ...
	I0731 22:50:44.338567 1200572 machine.go:94] provisionDockerMachine start ...
	I0731 22:50:44.338594 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:50:44.338912 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:44.341836 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.342373 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:44.342405 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.342593 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:50:44.342817 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.342978 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.343102 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:50:44.343288 1200572 main.go:141] libmachine: Using SSH client type: native
	I0731 22:50:44.343560 1200572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:50:44.343578 1200572 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 22:50:44.460662 1200572 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150891
	
	I0731 22:50:44.460700 1200572 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:50:44.460965 1200572 buildroot.go:166] provisioning hostname "ha-150891"
	I0731 22:50:44.460997 1200572 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:50:44.461226 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:44.463952 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.464344 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:44.464370 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.464552 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:50:44.464780 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.464915 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.465070 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:50:44.465210 1200572 main.go:141] libmachine: Using SSH client type: native
	I0731 22:50:44.465414 1200572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:50:44.465436 1200572 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-150891 && echo "ha-150891" | sudo tee /etc/hostname
	I0731 22:50:44.595007 1200572 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150891
	
	I0731 22:50:44.595041 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:44.598178 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.598600 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:44.598625 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.598818 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:50:44.599023 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.599221 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.599359 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:50:44.599530 1200572 main.go:141] libmachine: Using SSH client type: native
	I0731 22:50:44.599704 1200572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:50:44.599725 1200572 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-150891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-150891/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-150891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:50:44.713075 1200572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:50:44.713106 1200572 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 22:50:44.713152 1200572 buildroot.go:174] setting up certificates
	I0731 22:50:44.713163 1200572 provision.go:84] configureAuth start
	I0731 22:50:44.713175 1200572 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:50:44.713515 1200572 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:50:44.716296 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.716765 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:44.716790 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.717002 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:44.719564 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.719960 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:44.719992 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.720215 1200572 provision.go:143] copyHostCerts
	I0731 22:50:44.720246 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:50:44.720295 1200572 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 22:50:44.720316 1200572 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:50:44.720402 1200572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 22:50:44.720497 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:50:44.720523 1200572 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 22:50:44.720530 1200572 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:50:44.720569 1200572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 22:50:44.720635 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:50:44.720660 1200572 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 22:50:44.720669 1200572 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:50:44.720698 1200572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 22:50:44.720771 1200572 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.ha-150891 san=[127.0.0.1 192.168.39.105 ha-150891 localhost minikube]
	I0731 22:50:45.005447 1200572 provision.go:177] copyRemoteCerts
	I0731 22:50:45.005523 1200572 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:50:45.005557 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:45.008626 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:45.009052 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:45.009084 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:45.009291 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:50:45.009582 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:45.009762 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:50:45.009918 1200572 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:50:45.098405 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 22:50:45.098501 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 22:50:45.129746 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 22:50:45.129838 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0731 22:50:45.157306 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 22:50:45.157401 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 22:50:45.182984 1200572 provision.go:87] duration metric: took 469.803186ms to configureAuth
	I0731 22:50:45.183018 1200572 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:50:45.183243 1200572 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:50:45.183320 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:45.186016 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:45.186348 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:45.186371 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:45.186571 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:50:45.186792 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:45.186965 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:45.187160 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:50:45.187337 1200572 main.go:141] libmachine: Using SSH client type: native
	I0731 22:50:45.187574 1200572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:50:45.187596 1200572 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 22:52:15.949999 1200572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 22:52:15.950044 1200572 machine.go:97] duration metric: took 1m31.611461705s to provisionDockerMachine
	I0731 22:52:15.950058 1200572 start.go:293] postStartSetup for "ha-150891" (driver="kvm2")
	I0731 22:52:15.950069 1200572 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:52:15.950117 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:15.950537 1200572 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:52:15.950569 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:52:15.953909 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:15.954490 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:15.954522 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:15.954811 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:52:15.955072 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:15.955324 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:52:15.955521 1200572 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:52:16.042723 1200572 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:52:16.047266 1200572 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:52:16.047302 1200572 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 22:52:16.047380 1200572 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 22:52:16.047461 1200572 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 22:52:16.047473 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /etc/ssl/certs/11794002.pem
	I0731 22:52:16.047568 1200572 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:52:16.057390 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:52:16.082401 1200572 start.go:296] duration metric: took 132.326259ms for postStartSetup
	I0731 22:52:16.082456 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:16.082808 1200572 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0731 22:52:16.082837 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:52:16.086003 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.086442 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:16.086466 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.086734 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:52:16.086958 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:16.087196 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:52:16.087362 1200572 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	W0731 22:52:16.174414 1200572 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0731 22:52:16.174454 1200572 fix.go:56] duration metric: took 1m31.858073511s for fixHost
	I0731 22:52:16.174479 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:52:16.177315 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.177738 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:16.177762 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.177982 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:52:16.178213 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:16.178388 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:16.178510 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:52:16.178711 1200572 main.go:141] libmachine: Using SSH client type: native
	I0731 22:52:16.178886 1200572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:52:16.178897 1200572 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 22:52:16.293066 1200572 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722466336.264134929
	
	I0731 22:52:16.293097 1200572 fix.go:216] guest clock: 1722466336.264134929
	I0731 22:52:16.293107 1200572 fix.go:229] Guest: 2024-07-31 22:52:16.264134929 +0000 UTC Remote: 2024-07-31 22:52:16.174461343 +0000 UTC m=+91.996620433 (delta=89.673586ms)
	I0731 22:52:16.293139 1200572 fix.go:200] guest clock delta is within tolerance: 89.673586ms
	I0731 22:52:16.293147 1200572 start.go:83] releasing machines lock for "ha-150891", held for 1m31.97678769s
	I0731 22:52:16.293174 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:16.293527 1200572 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:52:16.296331 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.296818 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:16.296846 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.297082 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:16.297757 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:16.297976 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:16.298085 1200572 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 22:52:16.298146 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:52:16.298231 1200572 ssh_runner.go:195] Run: cat /version.json
	I0731 22:52:16.298259 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:52:16.301164 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.301376 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.301549 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:16.301578 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.301697 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:52:16.301874 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:16.301885 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:16.301903 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.302057 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:52:16.302133 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:52:16.302251 1200572 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:52:16.302277 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:16.302414 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:52:16.302580 1200572 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:52:16.404151 1200572 ssh_runner.go:195] Run: systemctl --version
	I0731 22:52:16.410125 1200572 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 22:52:16.571040 1200572 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 22:52:16.579567 1200572 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:52:16.579664 1200572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:52:16.589227 1200572 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 22:52:16.589262 1200572 start.go:495] detecting cgroup driver to use...
	I0731 22:52:16.589364 1200572 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:52:16.606500 1200572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:52:16.620790 1200572 docker.go:217] disabling cri-docker service (if available) ...
	I0731 22:52:16.620883 1200572 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 22:52:16.635400 1200572 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 22:52:16.650021 1200572 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 22:52:16.800140 1200572 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 22:52:16.961235 1200572 docker.go:233] disabling docker service ...
	I0731 22:52:16.961312 1200572 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 22:52:16.981803 1200572 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 22:52:16.996566 1200572 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 22:52:17.150113 1200572 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 22:52:17.316349 1200572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 22:52:17.330875 1200572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:52:17.349762 1200572 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 22:52:17.349831 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.360749 1200572 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 22:52:17.360831 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.371473 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.382561 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.393503 1200572 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:52:17.404740 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.415972 1200572 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.427390 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.438419 1200572 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:52:17.448575 1200572 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:52:17.459000 1200572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:52:17.606614 1200572 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 22:52:17.894404 1200572 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 22:52:17.894491 1200572 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 22:52:17.899642 1200572 start.go:563] Will wait 60s for crictl version
	I0731 22:52:17.899710 1200572 ssh_runner.go:195] Run: which crictl
	I0731 22:52:17.903650 1200572 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:52:17.937148 1200572 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 22:52:17.937237 1200572 ssh_runner.go:195] Run: crio --version
	I0731 22:52:17.965264 1200572 ssh_runner.go:195] Run: crio --version
	I0731 22:52:17.997598 1200572 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 22:52:17.999122 1200572 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:52:18.002319 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:18.002820 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:18.002846 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:18.003045 1200572 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 22:52:18.008132 1200572 kubeadm.go:883] updating cluster {Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.120 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 22:52:18.008325 1200572 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:52:18.008413 1200572 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:52:18.052964 1200572 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 22:52:18.053007 1200572 crio.go:433] Images already preloaded, skipping extraction
	I0731 22:52:18.053077 1200572 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:52:18.091529 1200572 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 22:52:18.091558 1200572 cache_images.go:84] Images are preloaded, skipping loading
	I0731 22:52:18.091568 1200572 kubeadm.go:934] updating node { 192.168.39.105 8443 v1.30.3 crio true true} ...
	I0731 22:52:18.091680 1200572 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-150891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 22:52:18.091769 1200572 ssh_runner.go:195] Run: crio config
	I0731 22:52:18.146927 1200572 cni.go:84] Creating CNI manager for ""
	I0731 22:52:18.146949 1200572 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 22:52:18.146959 1200572 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 22:52:18.146984 1200572 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.105 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-150891 NodeName:ha-150891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 22:52:18.147143 1200572 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-150891"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 22:52:18.147164 1200572 kube-vip.go:115] generating kube-vip config ...
	I0731 22:52:18.147222 1200572 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:52:18.159526 1200572 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:52:18.159649 1200572 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
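The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines further down. A standalone sketch (not minikube code, and assuming the gopkg.in/yaml.v3 module) that reads the generated manifest back and prints the env vars controlling the VIP, so the rendered address and load-balancer port can be checked against the APIServerHAVIP in the cluster config:

    // Print the kube-vip env vars that carry the VIP address and LB settings.
    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3" // assumed dependency; any YAML parser works
    )

    type pod struct {
    	Spec struct {
    		Containers []struct {
    			Env []struct {
    				Name  string `yaml:"name"`
    				Value string `yaml:"value"`
    			} `yaml:"env"`
    		} `yaml:"containers"`
    	} `yaml:"spec"`
    }

    func main() {
    	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var p pod
    	if err := yaml.Unmarshal(data, &p); err != nil {
    		log.Fatal(err)
    	}
    	for _, c := range p.Spec.Containers {
    		for _, e := range c.Env {
    			switch e.Name {
    			case "address", "port", "lb_enable", "lb_port", "vip_interface":
    				fmt.Printf("%s=%s\n", e.Name, e.Value)
    			}
    		}
    	}
    }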
	I0731 22:52:18.159740 1200572 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:52:18.170825 1200572 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 22:52:18.170898 1200572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 22:52:18.181163 1200572 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 22:52:18.198318 1200572 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:52:18.215416 1200572 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 22:52:18.232948 1200572 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 22:52:18.250767 1200572 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:52:18.254890 1200572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:52:18.414341 1200572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:52:18.430067 1200572 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891 for IP: 192.168.39.105
	I0731 22:52:18.430091 1200572 certs.go:194] generating shared ca certs ...
	I0731 22:52:18.430109 1200572 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:52:18.430293 1200572 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 22:52:18.430337 1200572 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 22:52:18.430350 1200572 certs.go:256] generating profile certs ...
	I0731 22:52:18.430441 1200572 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key
	I0731 22:52:18.430470 1200572 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.b3ebaa97
	I0731 22:52:18.430485 1200572 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.b3ebaa97 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.105 192.168.39.224 192.168.39.241 192.168.39.254]
	I0731 22:52:18.561064 1200572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.b3ebaa97 ...
	I0731 22:52:18.561102 1200572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.b3ebaa97: {Name:mk2ff593ee3e47083d976067ae0ef73087f1db96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:52:18.561292 1200572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.b3ebaa97 ...
	I0731 22:52:18.561305 1200572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.b3ebaa97: {Name:mk9358e1f80e93c54df5c399710f0e6123bbc559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:52:18.561382 1200572 certs.go:381] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.b3ebaa97 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt
	I0731 22:52:18.561550 1200572 certs.go:385] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.b3ebaa97 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key
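The apiserver certificate generated above must carry every control-plane node IP plus the HA VIP as SANs. A minimal stdlib-only Go sketch (not part of minikube) that parses the copied apiserver.crt and lists its IP SANs for verification:

    // List the IP SANs embedded in the generated apiserver certificate.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Path taken from the log above; adjust to your own profile directory.
    	data, err := os.ReadFile("/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block in apiserver.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Expect 10.96.0.1, 127.0.0.1, 10.0.0.1, the node IPs and 192.168.39.254.
    	for _, ip := range cert.IPAddresses {
    		fmt.Println(ip)
    	}
    }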
	I0731 22:52:18.561686 1200572 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key
	I0731 22:52:18.561702 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:52:18.561715 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:52:18.561728 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:52:18.561738 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:52:18.561750 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:52:18.561764 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:52:18.561775 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:52:18.561789 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:52:18.561835 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 22:52:18.561877 1200572 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 22:52:18.561886 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 22:52:18.561906 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 22:52:18.561927 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 22:52:18.561947 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 22:52:18.561985 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:52:18.562009 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem -> /usr/share/ca-certificates/1179400.pem
	I0731 22:52:18.562024 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /usr/share/ca-certificates/11794002.pem
	I0731 22:52:18.562037 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:52:18.562633 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:52:18.588138 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:52:18.612880 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:52:18.638269 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 22:52:18.663895 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 22:52:18.689022 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 22:52:18.713785 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:52:18.738914 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:52:18.763986 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 22:52:18.788897 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 22:52:18.814007 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:52:18.839761 1200572 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 22:52:18.856902 1200572 ssh_runner.go:195] Run: openssl version
	I0731 22:52:18.862812 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:52:18.873944 1200572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:52:18.878994 1200572 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:52:18.879078 1200572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:52:18.885217 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:52:18.895040 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 22:52:18.906615 1200572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 22:52:18.911720 1200572 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 22:52:18.911800 1200572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 22:52:18.917966 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
	I0731 22:52:18.928071 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 22:52:18.939371 1200572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 22:52:18.944154 1200572 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 22:52:18.944239 1200572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 22:52:18.950023 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
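The test -L / ln -fs probes above rebuild OpenSSL's hashed-name lookup directory, where each CA certificate is reachable under its subject hash as <hash>.0. From the hashes shown in the log, the resulting layout is:

    /etc/ssl/certs/b5213941.0 -> /etc/ssl/certs/minikubeCA.pem
    /etc/ssl/certs/51391683.0 -> /etc/ssl/certs/1179400.pem
    /etc/ssl/certs/3ec20f2e.0 -> /etc/ssl/certs/11794002.pem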
	I0731 22:52:18.959669 1200572 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:52:18.964562 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 22:52:18.970429 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 22:52:18.976425 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 22:52:18.982244 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 22:52:18.988035 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 22:52:18.993919 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
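Each -checkend 86400 run above asks whether a certificate expires within the next 24 hours. The same check in Go, as a standalone stdlib-only sketch shown for one of the certificate paths from the log:

    // Report whether a certificate expires within the next 24 hours,
    // mirroring `openssl x509 -noout -checkend 86400`.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate will expire within 24h")
    	} else {
    		fmt.Println("certificate is valid for at least another 24h")
    	}
    }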
	I0731 22:52:18.999811 1200572 kubeadm.go:392] StartCluster: {Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.120 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:52:18.999949 1200572 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 22:52:19.000020 1200572 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 22:52:19.036133 1200572 cri.go:89] found id: "e32bcdbc931f7a75c1f40f7c3839d94e018c6c9beb067b341eaf6f7f2855661d"
	I0731 22:52:19.036170 1200572 cri.go:89] found id: "09367b7e537fc53bc59177ce2dd80ed599a9b96efdacdc59b8d5043c37b1200c"
	I0731 22:52:19.036177 1200572 cri.go:89] found id: "aa312cf0b6219dcc4a642e96d32d4947f9a59f82178020e3ce208a74292c12c5"
	I0731 22:52:19.036183 1200572 cri.go:89] found id: "6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811"
	I0731 22:52:19.036187 1200572 cri.go:89] found id: "e3efb8efde2a05c2c5ee11cb57e2715c8dbdcdbf679b9c4fe830a41da4707f26"
	I0731 22:52:19.036191 1200572 cri.go:89] found id: "569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2"
	I0731 22:52:19.036196 1200572 cri.go:89] found id: "6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f"
	I0731 22:52:19.036199 1200572 cri.go:89] found id: "45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526"
	I0731 22:52:19.036203 1200572 cri.go:89] found id: "8ab90b2c667e4a162bc2808fd67610192ef721b38e5015a42dd1d8f9d180fc85"
	I0731 22:52:19.036227 1200572 cri.go:89] found id: "8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5"
	I0731 22:52:19.036245 1200572 cri.go:89] found id: "92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86"
	I0731 22:52:19.036250 1200572 cri.go:89] found id: "31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8"
	I0731 22:52:19.036258 1200572 cri.go:89] found id: "c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78"
	I0731 22:52:19.036262 1200572 cri.go:89] found id: ""
	I0731 22:52:19.036324 1200572 ssh_runner.go:195] Run: sudo runc list -f json
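The found-id lines above come from a crictl query filtered to the kube-system namespace. A standalone sketch of that step (assuming crictl is on PATH and sudo is available), not the actual cri.go implementation:

    // List kube-system container IDs the way the step above does: run crictl
    // with a namespace label filter and split the one-ID-per-line output.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	ids := strings.Fields(strings.TrimSpace(string(out)))
    	for _, id := range ids {
    		fmt.Println("found id:", id)
    	}
    	fmt.Printf("%d kube-system containers\n", len(ids))
    }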
	
	
	==> CRI-O <==
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.014471170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466497014448178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f443b0a-a8cc-4363-bafd-e09e022db480 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.015071484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=428c9760-4cd0-4370-81aa-983c68f236ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.015183580Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=428c9760-4cd0-4370-81aa-983c68f236ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.015637464Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e523a71817c2281745cea76e1c3eb9d6a34ab71c970be9f279e434b16584212,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722466428005721861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936da742737fd9866ed6f1699fd59673b966983ce9ab155496170b8dc0d69c0f,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722466387006465100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff53188db11c806bf35dd07eff4b1128c44be285a01574f67f09ef715b4b10e,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722466382001895181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722466381007815423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0c62213b2094cb3bc6f5ca0ee611bfcf838f220d9eac73b1f247f9306b7b12,PodSandboxId:6c486d06f45f10efcffeaa496724cce5b36a7b76bac2d1364cd435d6d29ee346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722466379416346806,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759aef3785027e51a6ffaf4600501de1c4f172255ba0a56a4a066e48b76815cd,PodSandboxId:b5a9d2dab28855158bb8d7d8a912eb4a3b0c8b8c4b65dd70e27205a54b4eeef5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722466358688421220,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a78ef189dbee5a2486ddd9b05d358c71,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9adf2f762249c086770be0dccdad6342b31bb791b5049eab3c303f2a4f58b6d,PodSandboxId:eedad7f368c86a11a78608bf4c65b2e704562d184cdff0196197312c0be308a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722466345915403089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:99532a403baebdc6663749afaafe4f1278665f15fd9881cec7372cb3bd7a22cf,PodSandboxId:fa08797866ea77da5dcac572e4faa6cfb6a6c19ade5d505d0067648fa01291b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466346138487527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2138e0d9e3344e3b532f5acecf75d27a8cedbb6953fc63d24ede45cbd0006b9b,PodSandboxId:fa3085df7464ae26df2bfda86ac8e5b5ea3ee13774ba1a145488ac6e5b2abab5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722466345970782157,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d90c4a99cb0fa58abef1bd6bf5fc8a793d6dc437e6b34cbea0e8dad8fff1b1,PodSandboxId:01adb669956067e8567526b7ae42a30b92bb8ac20b697066b67ed9e531f12c8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466345888180447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:533eebe6788d6b43158307f1805677a1d4a93ba42b1a49930bc8e20cf70bb248,PodSandboxId:07d745111be0ce07d4ad6a3c6a25ff5e77b3848179d092aa5a8207e952180f2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722466345742159642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc5a6318a06b6592b305ff451abdd474f1f8c15db3ecb3842e8b6bb78ee8927,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722466345818498562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405
763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13dc8c74ad52362b9fef7a6a07c57b1ce5cee751af4222dbea80f7591f98aba,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722466345680568306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417
d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5db421039fa4ddf654b2c8427782fc8c571483506ab0654c5e1e1a9332dbe52,PodSandboxId:b7a5cd34a635aee7a672b63491218083fc9a5bae4ff51c73b6caaf1e6408636e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722466345641850218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722465854210269279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712295655329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712236095444,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722465700318134680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722465696992304673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722465676786785746,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722465676790601629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=428c9760-4cd0-4370-81aa-983c68f236ff name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.063444932Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=141f1e13-d291-44b9-93ad-72aa9ed4c78a name=/runtime.v1.RuntimeService/Version
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.063530274Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=141f1e13-d291-44b9-93ad-72aa9ed4c78a name=/runtime.v1.RuntimeService/Version
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.064907647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7e77073-eded-494e-8422-e96b8b4c7623 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.065519689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466497065494176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7e77073-eded-494e-8422-e96b8b4c7623 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.066292929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=543d712e-1f97-4f09-892d-a70983b81169 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.066355375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=543d712e-1f97-4f09-892d-a70983b81169 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.067001679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e523a71817c2281745cea76e1c3eb9d6a34ab71c970be9f279e434b16584212,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722466428005721861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936da742737fd9866ed6f1699fd59673b966983ce9ab155496170b8dc0d69c0f,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722466387006465100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff53188db11c806bf35dd07eff4b1128c44be285a01574f67f09ef715b4b10e,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722466382001895181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722466381007815423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0c62213b2094cb3bc6f5ca0ee611bfcf838f220d9eac73b1f247f9306b7b12,PodSandboxId:6c486d06f45f10efcffeaa496724cce5b36a7b76bac2d1364cd435d6d29ee346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722466379416346806,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759aef3785027e51a6ffaf4600501de1c4f172255ba0a56a4a066e48b76815cd,PodSandboxId:b5a9d2dab28855158bb8d7d8a912eb4a3b0c8b8c4b65dd70e27205a54b4eeef5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722466358688421220,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a78ef189dbee5a2486ddd9b05d358c71,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9adf2f762249c086770be0dccdad6342b31bb791b5049eab3c303f2a4f58b6d,PodSandboxId:eedad7f368c86a11a78608bf4c65b2e704562d184cdff0196197312c0be308a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722466345915403089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:99532a403baebdc6663749afaafe4f1278665f15fd9881cec7372cb3bd7a22cf,PodSandboxId:fa08797866ea77da5dcac572e4faa6cfb6a6c19ade5d505d0067648fa01291b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466346138487527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2138e0d9e3344e3b532f5acecf75d27a8cedbb6953fc63d24ede45cbd0006b9b,PodSandboxId:fa3085df7464ae26df2bfda86ac8e5b5ea3ee13774ba1a145488ac6e5b2abab5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722466345970782157,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d90c4a99cb0fa58abef1bd6bf5fc8a793d6dc437e6b34cbea0e8dad8fff1b1,PodSandboxId:01adb669956067e8567526b7ae42a30b92bb8ac20b697066b67ed9e531f12c8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466345888180447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:533eebe6788d6b43158307f1805677a1d4a93ba42b1a49930bc8e20cf70bb248,PodSandboxId:07d745111be0ce07d4ad6a3c6a25ff5e77b3848179d092aa5a8207e952180f2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722466345742159642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc5a6318a06b6592b305ff451abdd474f1f8c15db3ecb3842e8b6bb78ee8927,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722466345818498562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405
763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13dc8c74ad52362b9fef7a6a07c57b1ce5cee751af4222dbea80f7591f98aba,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722466345680568306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417
d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5db421039fa4ddf654b2c8427782fc8c571483506ab0654c5e1e1a9332dbe52,PodSandboxId:b7a5cd34a635aee7a672b63491218083fc9a5bae4ff51c73b6caaf1e6408636e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722466345641850218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722465854210269279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712295655329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712236095444,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722465700318134680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722465696992304673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722465676786785746,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722465676790601629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=543d712e-1f97-4f09-892d-a70983b81169 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.113900141Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=200f6863-9672-4d89-932a-7d9392911e96 name=/runtime.v1.RuntimeService/Version
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.114161032Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=200f6863-9672-4d89-932a-7d9392911e96 name=/runtime.v1.RuntimeService/Version
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.118745734Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2f1658f4-6e46-453f-a907-6a957caf5c4e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.119752641Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466497119674732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2f1658f4-6e46-453f-a907-6a957caf5c4e name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.120533827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf1f49e5-6b8b-406b-aec6-88e164cb6e18 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.120624880Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf1f49e5-6b8b-406b-aec6-88e164cb6e18 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.121096197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e523a71817c2281745cea76e1c3eb9d6a34ab71c970be9f279e434b16584212,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722466428005721861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936da742737fd9866ed6f1699fd59673b966983ce9ab155496170b8dc0d69c0f,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722466387006465100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff53188db11c806bf35dd07eff4b1128c44be285a01574f67f09ef715b4b10e,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722466382001895181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722466381007815423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0c62213b2094cb3bc6f5ca0ee611bfcf838f220d9eac73b1f247f9306b7b12,PodSandboxId:6c486d06f45f10efcffeaa496724cce5b36a7b76bac2d1364cd435d6d29ee346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722466379416346806,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759aef3785027e51a6ffaf4600501de1c4f172255ba0a56a4a066e48b76815cd,PodSandboxId:b5a9d2dab28855158bb8d7d8a912eb4a3b0c8b8c4b65dd70e27205a54b4eeef5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722466358688421220,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a78ef189dbee5a2486ddd9b05d358c71,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9adf2f762249c086770be0dccdad6342b31bb791b5049eab3c303f2a4f58b6d,PodSandboxId:eedad7f368c86a11a78608bf4c65b2e704562d184cdff0196197312c0be308a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722466345915403089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:99532a403baebdc6663749afaafe4f1278665f15fd9881cec7372cb3bd7a22cf,PodSandboxId:fa08797866ea77da5dcac572e4faa6cfb6a6c19ade5d505d0067648fa01291b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466346138487527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2138e0d9e3344e3b532f5acecf75d27a8cedbb6953fc63d24ede45cbd0006b9b,PodSandboxId:fa3085df7464ae26df2bfda86ac8e5b5ea3ee13774ba1a145488ac6e5b2abab5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722466345970782157,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d90c4a99cb0fa58abef1bd6bf5fc8a793d6dc437e6b34cbea0e8dad8fff1b1,PodSandboxId:01adb669956067e8567526b7ae42a30b92bb8ac20b697066b67ed9e531f12c8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466345888180447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:533eebe6788d6b43158307f1805677a1d4a93ba42b1a49930bc8e20cf70bb248,PodSandboxId:07d745111be0ce07d4ad6a3c6a25ff5e77b3848179d092aa5a8207e952180f2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722466345742159642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc5a6318a06b6592b305ff451abdd474f1f8c15db3ecb3842e8b6bb78ee8927,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722466345818498562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405
763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13dc8c74ad52362b9fef7a6a07c57b1ce5cee751af4222dbea80f7591f98aba,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722466345680568306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417
d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5db421039fa4ddf654b2c8427782fc8c571483506ab0654c5e1e1a9332dbe52,PodSandboxId:b7a5cd34a635aee7a672b63491218083fc9a5bae4ff51c73b6caaf1e6408636e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722466345641850218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722465854210269279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712295655329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712236095444,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722465700318134680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722465696992304673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722465676786785746,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722465676790601629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf1f49e5-6b8b-406b-aec6-88e164cb6e18 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.162202634Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=115c3d93-d81c-4409-aa92-55dc9fd1dc83 name=/runtime.v1.RuntimeService/Version
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.162296305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=115c3d93-d81c-4409-aa92-55dc9fd1dc83 name=/runtime.v1.RuntimeService/Version
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.163656567Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=983cd4b9-127b-4ef8-960b-575328e3b978 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.164130513Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466497164104217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=983cd4b9-127b-4ef8-960b-575328e3b978 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.164904320Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76df87eb-2c0c-4505-8401-fa89540becaa name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.164982351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76df87eb-2c0c-4505-8401-fa89540becaa name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:54:57 ha-150891 crio[3719]: time="2024-07-31 22:54:57.165447568Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e523a71817c2281745cea76e1c3eb9d6a34ab71c970be9f279e434b16584212,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722466428005721861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936da742737fd9866ed6f1699fd59673b966983ce9ab155496170b8dc0d69c0f,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722466387006465100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff53188db11c806bf35dd07eff4b1128c44be285a01574f67f09ef715b4b10e,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722466382001895181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722466381007815423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0c62213b2094cb3bc6f5ca0ee611bfcf838f220d9eac73b1f247f9306b7b12,PodSandboxId:6c486d06f45f10efcffeaa496724cce5b36a7b76bac2d1364cd435d6d29ee346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722466379416346806,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759aef3785027e51a6ffaf4600501de1c4f172255ba0a56a4a066e48b76815cd,PodSandboxId:b5a9d2dab28855158bb8d7d8a912eb4a3b0c8b8c4b65dd70e27205a54b4eeef5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722466358688421220,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a78ef189dbee5a2486ddd9b05d358c71,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9adf2f762249c086770be0dccdad6342b31bb791b5049eab3c303f2a4f58b6d,PodSandboxId:eedad7f368c86a11a78608bf4c65b2e704562d184cdff0196197312c0be308a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722466345915403089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:99532a403baebdc6663749afaafe4f1278665f15fd9881cec7372cb3bd7a22cf,PodSandboxId:fa08797866ea77da5dcac572e4faa6cfb6a6c19ade5d505d0067648fa01291b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466346138487527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2138e0d9e3344e3b532f5acecf75d27a8cedbb6953fc63d24ede45cbd0006b9b,PodSandboxId:fa3085df7464ae26df2bfda86ac8e5b5ea3ee13774ba1a145488ac6e5b2abab5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722466345970782157,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d90c4a99cb0fa58abef1bd6bf5fc8a793d6dc437e6b34cbea0e8dad8fff1b1,PodSandboxId:01adb669956067e8567526b7ae42a30b92bb8ac20b697066b67ed9e531f12c8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466345888180447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:533eebe6788d6b43158307f1805677a1d4a93ba42b1a49930bc8e20cf70bb248,PodSandboxId:07d745111be0ce07d4ad6a3c6a25ff5e77b3848179d092aa5a8207e952180f2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722466345742159642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc5a6318a06b6592b305ff451abdd474f1f8c15db3ecb3842e8b6bb78ee8927,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722466345818498562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405
763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13dc8c74ad52362b9fef7a6a07c57b1ce5cee751af4222dbea80f7591f98aba,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722466345680568306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417
d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5db421039fa4ddf654b2c8427782fc8c571483506ab0654c5e1e1a9332dbe52,PodSandboxId:b7a5cd34a635aee7a672b63491218083fc9a5bae4ff51c73b6caaf1e6408636e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722466345641850218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722465854210269279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712295655329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712236095444,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722465700318134680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722465696992304673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722465676786785746,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722465676790601629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76df87eb-2c0c-4505-8401-fa89540becaa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7e523a71817c2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   3978400018cf7       storage-provisioner
	936da742737fd       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   0e42ebfba53c1       kube-apiserver-ha-150891
	3ff53188db11c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   9df1481757b37       kube-controller-manager-ha-150891
	8ba788340850a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   3978400018cf7       storage-provisioner
	2f0c62213b209       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   6c486d06f45f1       busybox-fc5497c4f-98526
	759aef3785027       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   b5a9d2dab2885       kube-vip-ha-150891
	99532a403baeb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   fa08797866ea7       coredns-7db6d8ff4d-4928n
	2138e0d9e3344       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   fa3085df7464a       kindnet-4qn8c
	b9adf2f762249       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   eedad7f368c86       kube-proxy-9xcss
	12d90c4a99cb0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   01adb66995606       coredns-7db6d8ff4d-rkd4j
	3fc5a6318a06b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   0e42ebfba53c1       kube-apiserver-ha-150891
	533eebe6788d6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   07d745111be0c       etcd-ha-150891
	a13dc8c74ad52       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   9df1481757b37       kube-controller-manager-ha-150891
	e5db421039fa4       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   b7a5cd34a635a       kube-scheduler-ha-150891
	17bbba80074e2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   23ff00497365e       busybox-fc5497c4f-98526
	6c2d6faeccb11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   60acb98d73509       coredns-7db6d8ff4d-4928n
	569d471778fea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   911e886f5312d       coredns-7db6d8ff4d-rkd4j
	6800ea54157a1       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   de805f7545942       kindnet-4qn8c
	45f49431a7774       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   af4274f85760c       kube-proxy-9xcss
	31a5692b683c3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   148244b8abdde       etcd-ha-150891
	c5a522e53c2bc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      13 minutes ago       Exited              kube-scheduler            0                   015145f976eb6       kube-scheduler-ha-150891
	
	
	==> coredns [12d90c4a99cb0fa58abef1bd6bf5fc8a793d6dc437e6b34cbea0e8dad8fff1b1] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[246625185]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 22:52:35.090) (total time: 10000ms):
	Trace[246625185]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (22:52:45.091)
	Trace[246625185]: [10.000863103s] [10.000863103s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43382->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[197213444]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 22:52:40.937) (total time: 10064ms):
	Trace[197213444]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43382->10.96.0.1:443: read: connection reset by peer 10064ms (22:52:51.002)
	Trace[197213444]: [10.064702475s] [10.064702475s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43382->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2] <==
	[INFO] 10.244.1.2:33021 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180449s
	[INFO] 10.244.1.2:54691 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000080124s
	[INFO] 10.244.1.2:59380 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104324s
	[INFO] 10.244.2.2:46771 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088924s
	[INFO] 10.244.2.2:51063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001242769s
	[INFO] 10.244.2.2:49935 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074586s
	[INFO] 10.244.0.4:56290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010407s
	[INFO] 10.244.0.4:57803 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109451s
	[INFO] 10.244.1.2:53651 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133423s
	[INFO] 10.244.1.2:54989 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149762s
	[INFO] 10.244.1.2:55181 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079999s
	[INFO] 10.244.1.2:45949 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096277s
	[INFO] 10.244.2.2:38998 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160565s
	[INFO] 10.244.2.2:55687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080958s
	[INFO] 10.244.0.4:36222 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152278s
	[INFO] 10.244.0.4:55182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115569s
	[INFO] 10.244.0.4:40749 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099022s
	[INFO] 10.244.1.2:42636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134944s
	[INFO] 10.244.1.2:45102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091957s
	[INFO] 10.244.1.2:39878 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081213s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1886&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1886&timeout=5m21s&timeoutSeconds=321&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1883&timeout=8m44s&timeoutSeconds=524&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811] <==
	[INFO] 10.244.0.4:44718 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011559s
	[INFO] 10.244.1.2:39166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153589s
	[INFO] 10.244.1.2:53738 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00171146s
	[INFO] 10.244.1.2:53169 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192547s
	[INFO] 10.244.1.2:46534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001207677s
	[INFO] 10.244.1.2:40987 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092132s
	[INFO] 10.244.2.2:51004 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179521s
	[INFO] 10.244.2.2:44618 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001670196s
	[INFO] 10.244.2.2:34831 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094811s
	[INFO] 10.244.2.2:49392 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000285273s
	[INFO] 10.244.2.2:44694 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111378s
	[INFO] 10.244.0.4:58491 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160933s
	[INFO] 10.244.0.4:44490 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217734s
	[INFO] 10.244.2.2:53960 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106212s
	[INFO] 10.244.2.2:47661 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161869s
	[INFO] 10.244.0.4:43273 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101944s
	[INFO] 10.244.1.2:54182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187102s
	[INFO] 10.244.2.2:60067 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151741s
	[INFO] 10.244.2.2:49034 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160035s
	[INFO] 10.244.2.2:49392 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096218s
	[INFO] 10.244.2.2:59220 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129048s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1869&timeout=5m52s&timeoutSeconds=352&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1886&timeout=6m0s&timeoutSeconds=360&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [99532a403baebdc6663749afaafe4f1278665f15fd9881cec7372cb3bd7a22cf] <==
	[INFO] plugin/kubernetes: Trace[1710803375]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 22:52:30.801) (total time: 10001ms):
	Trace[1710803375]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (22:52:40.802)
	Trace[1710803375]: [10.001213607s] [10.001213607s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47602->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47602->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47614->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47614->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-150891
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T22_41_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:41:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:54:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:53:07 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:53:07 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:53:07 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:53:07 +0000   Wed, 31 Jul 2024 22:41:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-150891
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a8ca2005fa042d7a84b5199ab2c7a15
	  System UUID:                6a8ca200-5fa0-42d7-a84b-5199ab2c7a15
	  Boot ID:                    2ffe06f6-f7c0-4945-b70b-2276f3221b95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-98526              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-4928n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-rkd4j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-150891                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-4qn8c                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-150891             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-150891    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-9xcss                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-150891             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-150891                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 108s   kube-proxy       
	  Normal   Starting                 13m    kube-proxy       
	  Normal   Starting                 13m    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    13m    kubelet          Node ha-150891 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  13m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m    kubelet          Node ha-150891 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m    kubelet          Node ha-150891 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m    node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal   NodeReady                13m    kubelet          Node ha-150891 status is now: NodeReady
	  Normal   RegisteredNode           12m    node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal   RegisteredNode           10m    node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Warning  ContainerGCFailed        3m34s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           107s   node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal   RegisteredNode           95s    node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal   RegisteredNode           32s    node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	
	
	Name:               ha-150891-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_42_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:42:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:54:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:53:49 +0000   Wed, 31 Jul 2024 22:53:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:53:49 +0000   Wed, 31 Jul 2024 22:53:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:53:49 +0000   Wed, 31 Jul 2024 22:53:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:53:49 +0000   Wed, 31 Jul 2024 22:53:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-150891-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1608b7369bb468b8c8c5013f81b09bb
	  System UUID:                c1608b73-69bb-468b-8c8c-5013f81b09bb
	  Boot ID:                    1b6fd4e8-5623-4950-8060-fcbc7d176ce8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cwsjc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-150891-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-bz2j7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-150891-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-150891-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nmkp9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-150891-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-150891-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 94s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-150891-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-150891-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-150891-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  NodeNotReady             8m53s                  node-controller  Node ha-150891-m02 status is now: NodeNotReady
	  Normal  Starting                 2m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m17s (x8 over 2m17s)  kubelet          Node ha-150891-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s (x8 over 2m17s)  kubelet          Node ha-150891-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s (x7 over 2m17s)  kubelet          Node ha-150891-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           107s                   node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           95s                    node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           32s                    node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	
	
	Name:               ha-150891-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_43_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:43:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:54:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:54:25 +0000   Wed, 31 Jul 2024 22:53:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:54:25 +0000   Wed, 31 Jul 2024 22:53:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:54:25 +0000   Wed, 31 Jul 2024 22:53:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:54:25 +0000   Wed, 31 Jul 2024 22:53:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-150891-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 55f48101720847269fc5703e686b1c56
	  System UUID:                55f48101-7208-4726-9fc5-703e686b1c56
	  Boot ID:                    2bd3d82d-4668-43ae-b3f4-b0904a4ad5d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gzb99                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-150891-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-8bkwq                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-150891-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-150891-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-df4cg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-150891-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-150891-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 44s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-150891-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-150891-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-150891-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	  Normal   RegisteredNode           107s               node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	  Normal   NodeNotReady             67s                node-controller  Node ha-150891-m03 status is now: NodeNotReady
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  62s (x3 over 62s)  kubelet          Node ha-150891-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x3 over 62s)  kubelet          Node ha-150891-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x3 over 62s)  kubelet          Node ha-150891-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 62s (x2 over 62s)  kubelet          Node ha-150891-m03 has been rebooted, boot id: 2bd3d82d-4668-43ae-b3f4-b0904a4ad5d5
	  Normal   NodeReady                62s (x2 over 62s)  kubelet          Node ha-150891-m03 status is now: NodeReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-150891-m03 event: Registered Node ha-150891-m03 in Controller
	
	
	Name:               ha-150891-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_44_46_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:44:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:54:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:54:49 +0000   Wed, 31 Jul 2024 22:54:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:54:49 +0000   Wed, 31 Jul 2024 22:54:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:54:49 +0000   Wed, 31 Jul 2024 22:54:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:54:49 +0000   Wed, 31 Jul 2024 22:54:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    ha-150891-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bdcf2d763364b5cbf54f471f1e49c03
	  System UUID:                7bdcf2d7-6336-4b5c-bf54-f471f1e49c03
	  Boot ID:                    97571a69-365f-4aec-b624-9c75ef9066b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4ghcd       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-l8srs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-150891-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-150891-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-150891-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   NodeReady                9m52s              kubelet          Node ha-150891-m04 status is now: NodeReady
	  Normal   RegisteredNode           107s               node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   NodeNotReady             67s                node-controller  Node ha-150891-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-150891-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-150891-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-150891-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-150891-m04 has been rebooted, boot id: 97571a69-365f-4aec-b624-9c75ef9066b7
	  Normal   NodeReady                8s                 kubelet          Node ha-150891-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 22:41] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.059402] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055698] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.187489] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.128918] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.269933] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.169130] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +3.879571] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.061597] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.693408] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +0.081387] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.056574] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.292402] kauditd_printk_skb: 38 callbacks suppressed
	[Jul31 22:42] kauditd_printk_skb: 26 callbacks suppressed
	[Jul31 22:52] systemd-fstab-generator[3638]: Ignoring "noauto" option for root device
	[  +0.146321] systemd-fstab-generator[3651]: Ignoring "noauto" option for root device
	[  +0.193378] systemd-fstab-generator[3665]: Ignoring "noauto" option for root device
	[  +0.165845] systemd-fstab-generator[3677]: Ignoring "noauto" option for root device
	[  +0.297616] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[  +0.804829] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[  +4.747343] kauditd_printk_skb: 122 callbacks suppressed
	[  +7.365479] kauditd_printk_skb: 85 callbacks suppressed
	[Jul31 22:53] kauditd_printk_skb: 11 callbacks suppressed
	[ +12.058061] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8] <==
	2024/07/31 22:50:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 22:50:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 22:50:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 22:50:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 22:50:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T22:50:45.388499Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.105:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T22:50:45.388629Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.105:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T22:50:45.388793Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"38dbae10e7efb596","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-31T22:50:45.388967Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389003Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.38904Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389086Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389153Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389209Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389239Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389263Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389291Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389324Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389397Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389445Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389497Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389525Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.392892Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.105:2380"}
	{"level":"info","ts":"2024-07-31T22:50:45.393077Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.105:2380"}
	{"level":"info","ts":"2024-07-31T22:50:45.393111Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-150891","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.105:2380"],"advertise-client-urls":["https://192.168.39.105:2379"]}
	
	
	==> etcd [533eebe6788d6b43158307f1805677a1d4a93ba42b1a49930bc8e20cf70bb248] <==
	{"level":"warn","ts":"2024-07-31T22:53:53.231411Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.241:2380/version","remote-member-id":"2decda6e654e6303","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:53:53.231505Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2decda6e654e6303","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:53:56.789347Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2decda6e654e6303","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:53:56.801373Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2decda6e654e6303","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:53:57.234064Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.241:2380/version","remote-member-id":"2decda6e654e6303","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:53:57.234124Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2decda6e654e6303","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:54:01.236663Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.241:2380/version","remote-member-id":"2decda6e654e6303","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:54:01.236774Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2decda6e654e6303","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:54:01.790185Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2decda6e654e6303","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:54:01.802463Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2decda6e654e6303","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:54:05.239272Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.241:2380/version","remote-member-id":"2decda6e654e6303","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:54:05.239392Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2decda6e654e6303","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-31T22:54:06.102656Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.660086ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13084805243779593178 > lease_revoke:<id:3596910afea6b360>","response":"size:28"}
	{"level":"info","ts":"2024-07-31T22:54:06.102842Z","caller":"traceutil/trace.go:171","msg":"trace[1298668986] linearizableReadLoop","detail":"{readStateIndex:2773; appliedIndex:2772; }","duration":"208.024246ms","start":"2024-07-31T22:54:05.894802Z","end":"2024-07-31T22:54:06.102826Z","steps":["trace[1298668986] 'read index received'  (duration: 1.085833ms)","trace[1298668986] 'applied index is now lower than readState.Index'  (duration: 206.937177ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T22:54:06.103041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.211998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-150891-m03\" ","response":"range_response_count:1 size:5678"}
	{"level":"info","ts":"2024-07-31T22:54:06.103087Z","caller":"traceutil/trace.go:171","msg":"trace[524736862] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-150891-m03; range_end:; response_count:1; response_revision:2388; }","duration":"208.289577ms","start":"2024-07-31T22:54:05.894789Z","end":"2024-07-31T22:54:06.103078Z","steps":["trace[524736862] 'agreement among raft nodes before linearized reading'  (duration: 208.135596ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T22:54:06.105075Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.159342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:438"}
	{"level":"info","ts":"2024-07-31T22:54:06.105144Z","caller":"traceutil/trace.go:171","msg":"trace[97909831] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2388; }","duration":"127.264205ms","start":"2024-07-31T22:54:05.977868Z","end":"2024-07-31T22:54:06.105132Z","steps":["trace[97909831] 'agreement among raft nodes before linearized reading'  (duration: 126.754611ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T22:54:06.602037Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:54:06.602096Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:54:06.60236Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:54:06.628514Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38dbae10e7efb596","to":"2decda6e654e6303","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-31T22:54:06.628567Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:54:06.633775Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38dbae10e7efb596","to":"2decda6e654e6303","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-31T22:54:06.633806Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	
	
	==> kernel <==
	 22:54:57 up 14 min,  0 users,  load average: 0.28, 0.27, 0.19
	Linux ha-150891 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2138e0d9e3344e3b532f5acecf75d27a8cedbb6953fc63d24ede45cbd0006b9b] <==
	I0731 22:54:26.974306       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:54:36.979370       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:54:36.979428       1 main.go:299] handling current node
	I0731 22:54:36.979443       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:54:36.979448       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:54:36.979594       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:54:36.979606       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:54:36.979675       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:54:36.979776       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:54:46.976167       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:54:46.976275       1 main.go:299] handling current node
	I0731 22:54:46.976305       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:54:46.976323       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:54:46.976466       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:54:46.976530       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:54:46.976632       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:54:46.976669       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:54:56.973042       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:54:56.973102       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:54:56.973217       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:54:56.973240       1 main.go:299] handling current node
	I0731 22:54:56.973252       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:54:56.973261       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:54:56.973314       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:54:56.973330       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f] <==
	I0731 22:50:11.241105       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:50:21.242171       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:50:21.242216       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:50:21.242357       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:50:21.242378       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:50:21.242428       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:50:21.242445       1 main.go:299] handling current node
	I0731 22:50:21.242456       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:50:21.242461       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:50:31.239738       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:50:31.239781       1 main.go:299] handling current node
	I0731 22:50:31.239800       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:50:31.239808       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:50:31.239963       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:50:31.239995       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:50:31.240078       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:50:31.240086       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:50:41.240255       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:50:41.240301       1 main.go:299] handling current node
	I0731 22:50:41.240319       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:50:41.240324       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:50:41.240458       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:50:41.240482       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:50:41.240567       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:50:41.240586       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3fc5a6318a06b6592b305ff451abdd474f1f8c15db3ecb3842e8b6bb78ee8927] <==
	I0731 22:52:26.562344       1 options.go:221] external host was not specified, using 192.168.39.105
	I0731 22:52:26.567357       1 server.go:148] Version: v1.30.3
	I0731 22:52:26.567441       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:52:26.975809       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0731 22:52:26.981784       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 22:52:26.988214       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0731 22:52:26.988249       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0731 22:52:26.988431       1 instance.go:299] Using reconciler: lease
	W0731 22:52:46.967384       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0731 22:52:46.970894       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0731 22:52:46.991391       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [936da742737fd9866ed6f1699fd59673b966983ce9ab155496170b8dc0d69c0f] <==
	I0731 22:53:09.089976       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0731 22:53:09.090052       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0731 22:53:09.090245       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0731 22:53:09.140098       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 22:53:09.140133       1 policy_source.go:224] refreshing policies
	I0731 22:53:09.164241       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 22:53:09.164279       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 22:53:09.165643       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 22:53:09.165827       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 22:53:09.170082       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 22:53:09.171589       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 22:53:09.186764       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 22:53:09.194201       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 22:53:09.195233       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 22:53:09.195334       1 aggregator.go:165] initial CRD sync complete...
	I0731 22:53:09.195355       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 22:53:09.195361       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 22:53:09.195367       1 cache.go:39] Caches are synced for autoregister controller
	W0731 22:53:09.224167       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.224 192.168.39.241]
	I0731 22:53:09.225882       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 22:53:09.228985       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 22:53:09.242359       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0731 22:53:09.247776       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0731 22:53:10.077905       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 22:53:10.565652       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.105 192.168.39.224 192.168.39.241]
	
	
	==> kube-controller-manager [3ff53188db11c806bf35dd07eff4b1128c44be285a01574f67f09ef715b4b10e] <==
	I0731 22:53:22.182153       1 shared_informer.go:320] Caches are synced for taint
	I0731 22:53:22.182413       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0731 22:53:22.182512       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-150891"
	I0731 22:53:22.182560       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-150891-m02"
	I0731 22:53:22.182600       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-150891-m03"
	I0731 22:53:22.182635       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-150891-m04"
	I0731 22:53:22.182674       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 22:53:22.188168       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 22:53:22.215294       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 22:53:22.225318       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 22:53:22.661455       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 22:53:22.710605       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 22:53:22.710644       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 22:53:25.442594       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.969051ms"
	I0731 22:53:25.444020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.618µs"
	I0731 22:53:32.082770       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.762813ms"
	I0731 22:53:32.083880       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.929µs"
	I0731 22:53:32.117435       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-nz2q2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-nz2q2\": the object has been modified; please apply your changes to the latest version and try again"
	I0731 22:53:32.117733       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"5a1f4abf-1aa2-42ff-b547-deb6b2fe3421", APIVersion:"v1", ResourceVersion:"254", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-nz2q2 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-nz2q2": the object has been modified; please apply your changes to the latest version and try again
	I0731 22:53:51.170091       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.470309ms"
	I0731 22:53:51.170386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="105.043µs"
	I0731 22:53:55.945809       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="124.149µs"
	I0731 22:54:10.386659       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.528174ms"
	I0731 22:54:10.386778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.774µs"
	I0731 22:54:49.557254       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-150891-m04"
	
	
	==> kube-controller-manager [a13dc8c74ad52362b9fef7a6a07c57b1ce5cee751af4222dbea80f7591f98aba] <==
	I0731 22:52:27.543986       1 serving.go:380] Generated self-signed cert in-memory
	I0731 22:52:27.767213       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0731 22:52:27.767253       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:52:27.768640       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 22:52:27.768837       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0731 22:52:27.768916       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 22:52:27.769101       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0731 22:52:48.000056       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.105:8443/healthz\": dial tcp 192.168.39.105:8443: connect: connection refused"
	
	
	==> kube-proxy [45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526] <==
	E0731 22:49:30.810541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:30.810151       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:30.811167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:37.338203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:37.338271       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:37.338343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:37.338393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:37.338215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:37.338502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:49.626684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:49.626876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:49.628307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:49.628401       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:49.628350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:49.628532       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:50:11.130207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:50:11.130275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:50:14.202489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:50:14.202556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:50:14.202621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:50:14.202680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:50:38.779684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:50:38.779861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:50:41.851299       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:50:41.851413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [b9adf2f762249c086770be0dccdad6342b31bb791b5049eab3c303f2a4f58b6d] <==
	I0731 22:52:27.489068       1 server_linux.go:69] "Using iptables proxy"
	E0731 22:52:29.372137       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150891\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 22:52:32.442751       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150891\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 22:52:35.514237       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150891\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 22:52:41.659197       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150891\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 22:52:50.875078       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150891\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0731 22:53:09.151051       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.105"]
	I0731 22:53:09.281040       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 22:53:09.281127       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 22:53:09.281146       1 server_linux.go:165] "Using iptables Proxier"
	I0731 22:53:09.284485       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 22:53:09.284648       1 server.go:872] "Version info" version="v1.30.3"
	I0731 22:53:09.284675       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:53:09.285893       1 config.go:192] "Starting service config controller"
	I0731 22:53:09.285924       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 22:53:09.285948       1 config.go:101] "Starting endpoint slice config controller"
	I0731 22:53:09.285951       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 22:53:09.286589       1 config.go:319] "Starting node config controller"
	I0731 22:53:09.286615       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 22:53:09.386909       1 shared_informer.go:320] Caches are synced for service config
	I0731 22:53:09.386911       1 shared_informer.go:320] Caches are synced for node config
	I0731 22:53:09.387164       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78] <==
	W0731 22:50:40.804554       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:40.804666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 22:50:40.928617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 22:50:40.928784       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 22:50:41.043213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:41.043329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 22:50:41.247266       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 22:50:41.247367       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 22:50:43.104616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 22:50:43.104666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 22:50:43.472849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 22:50:43.472893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 22:50:43.931196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 22:50:43.931370       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 22:50:44.105797       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 22:50:44.105840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 22:50:44.170514       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:44.170567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 22:50:44.340896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 22:50:44.340946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 22:50:44.409462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:44.409514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 22:50:45.101037       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:45.101079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:45.312390       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e5db421039fa4ddf654b2c8427782fc8c571483506ab0654c5e1e1a9332dbe52] <==
	W0731 22:53:03.717470       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.105:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:03.717541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.105:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:03.810792       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.105:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:03.810905       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.105:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:05.598324       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.105:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:05.598399       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.105:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:05.775244       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.105:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:05.775453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.105:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:05.887533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.105:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:05.887678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.105:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:06.047727       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.105:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:06.047873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.105:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:06.206860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.105:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:06.206981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.105:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:06.252652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.105:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:06.252872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.105:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:06.298794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.105:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:06.298926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.105:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:07.050678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.105:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:07.050760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.105:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:09.096287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:53:09.096336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 22:53:09.129294       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 22:53:09.129340       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 22:53:23.505595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 22:53:03 ha-150891 kubelet[1359]: E0731 22:53:03.162298    1359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-150891&resourceVersion=1836": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 31 22:53:04 ha-150891 kubelet[1359]: I0731 22:53:04.299770    1359 scope.go:117] "RemoveContainer" containerID="76732b8c6164f6f5228a6de41218e29bfcfca1272d678bef315abb7228292402"
	Jul 31 22:53:04 ha-150891 kubelet[1359]: I0731 22:53:04.300801    1359 scope.go:117] "RemoveContainer" containerID="8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3"
	Jul 31 22:53:04 ha-150891 kubelet[1359]: E0731 22:53:04.300998    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c482636f-76e6-4ea7-9a14-3e9d6a7a4308)\"" pod="kube-system/storage-provisioner" podUID="c482636f-76e6-4ea7-9a14-3e9d6a7a4308"
	Jul 31 22:53:06 ha-150891 kubelet[1359]: I0731 22:53:06.992950    1359 scope.go:117] "RemoveContainer" containerID="3fc5a6318a06b6592b305ff451abdd474f1f8c15db3ecb3842e8b6bb78ee8927"
	Jul 31 22:53:18 ha-150891 kubelet[1359]: I0731 22:53:18.992681    1359 scope.go:117] "RemoveContainer" containerID="8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3"
	Jul 31 22:53:18 ha-150891 kubelet[1359]: E0731 22:53:18.992928    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c482636f-76e6-4ea7-9a14-3e9d6a7a4308)\"" pod="kube-system/storage-provisioner" podUID="c482636f-76e6-4ea7-9a14-3e9d6a7a4308"
	Jul 31 22:53:23 ha-150891 kubelet[1359]: E0731 22:53:23.030984    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:53:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:53:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:53:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:53:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:53:23 ha-150891 kubelet[1359]: I0731 22:53:23.081259    1359 scope.go:117] "RemoveContainer" containerID="aa312cf0b6219dcc4a642e96d32d4947f9a59f82178020e3ce208a74292c12c5"
	Jul 31 22:53:33 ha-150891 kubelet[1359]: I0731 22:53:33.711410    1359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-98526" podStartSLOduration=562.145650637 podStartE2EDuration="9m23.711388814s" podCreationTimestamp="2024-07-31 22:44:10 +0000 UTC" firstStartedPulling="2024-07-31 22:44:12.632565712 +0000 UTC m=+169.788904761" lastFinishedPulling="2024-07-31 22:44:14.198303898 +0000 UTC m=+171.354642938" observedRunningTime="2024-07-31 22:44:14.687200893 +0000 UTC m=+171.843539992" watchObservedRunningTime="2024-07-31 22:53:33.711388814 +0000 UTC m=+730.867727870"
	Jul 31 22:53:33 ha-150891 kubelet[1359]: I0731 22:53:33.992135    1359 scope.go:117] "RemoveContainer" containerID="8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3"
	Jul 31 22:53:33 ha-150891 kubelet[1359]: E0731 22:53:33.992407    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c482636f-76e6-4ea7-9a14-3e9d6a7a4308)\"" pod="kube-system/storage-provisioner" podUID="c482636f-76e6-4ea7-9a14-3e9d6a7a4308"
	Jul 31 22:53:47 ha-150891 kubelet[1359]: I0731 22:53:47.992758    1359 scope.go:117] "RemoveContainer" containerID="8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3"
	Jul 31 22:53:48 ha-150891 kubelet[1359]: I0731 22:53:48.992777    1359 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-150891" podUID="1b703a99-faf3-4c2d-a871-0fb6bce0b917"
	Jul 31 22:53:49 ha-150891 kubelet[1359]: I0731 22:53:49.022145    1359 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-150891"
	Jul 31 22:53:53 ha-150891 kubelet[1359]: I0731 22:53:53.012240    1359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-150891" podStartSLOduration=4.012222715 podStartE2EDuration="4.012222715s" podCreationTimestamp="2024-07-31 22:53:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-31 22:53:53.011393497 +0000 UTC m=+750.167732552" watchObservedRunningTime="2024-07-31 22:53:53.012222715 +0000 UTC m=+750.168561769"
	Jul 31 22:54:23 ha-150891 kubelet[1359]: E0731 22:54:23.026212    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:54:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:54:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:54:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:54:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 22:54:56.702732 1201919 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-1172186/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
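The stderr above is a log-collection hiccup rather than a cluster failure in itself: minikube could not echo the previous start output because a single line in lastStart.txt exceeds bufio.Scanner's default 64 KiB token limit. If one wanted to confirm that, a rough check against the same file is (path taken from the message above):

	# Hedged sketch: print the length of the longest line in the start log.
	awk '{ if (length > max) max = length } END { print max }' \
	    /home/jenkins/minikube-integration/19312-1172186/.minikube/logs/lastStart.txt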
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-150891 -n ha-150891
helpers_test.go:261: (dbg) Run:  kubectl --context ha-150891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (376.53s)
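The kube-scheduler entries in the dump above follow a familiar restart pattern: RBAC "forbidden" errors that commonly appear while the restarted apiserver's RBAC bootstrap is still settling, then "connection refused" against 192.168.39.105:8443 while it comes back, and finally a "Caches are synced" line once it is reachable again, which suggests the scheduler itself recovered. If such errors persisted well after a restart, one way to rule out a genuine RBAC problem would be a check along these lines (illustrative resources only, run with the cluster-admin kubeconfig minikube writes for the ha-150891 context):

	# Hedged sketch: impersonate the scheduler and confirm it may list what its informers need.
	kubectl --context ha-150891 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context ha-150891 auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler
	# Inspect the bootstrap binding that grants those permissions.
	kubectl --context ha-150891 get clusterrolebinding system:kube-scheduler -o yaml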

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150891 stop -v=7 --alsologtostderr: exit status 82 (2m0.496315331s)

                                                
                                                
-- stdout --
	* Stopping node "ha-150891-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:55:16.403413 1202334 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:55:16.403575 1202334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:55:16.403586 1202334 out.go:304] Setting ErrFile to fd 2...
	I0731 22:55:16.403592 1202334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:55:16.403906 1202334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:55:16.404211 1202334 out.go:298] Setting JSON to false
	I0731 22:55:16.404335 1202334 mustload.go:65] Loading cluster: ha-150891
	I0731 22:55:16.404729 1202334 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:55:16.404819 1202334 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:55:16.405001 1202334 mustload.go:65] Loading cluster: ha-150891
	I0731 22:55:16.405170 1202334 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:55:16.405203 1202334 stop.go:39] StopHost: ha-150891-m04
	I0731 22:55:16.405583 1202334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:55:16.405644 1202334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:55:16.421343 1202334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43109
	I0731 22:55:16.421914 1202334 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:55:16.422560 1202334 main.go:141] libmachine: Using API Version  1
	I0731 22:55:16.422587 1202334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:55:16.422932 1202334 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:55:16.425371 1202334 out.go:177] * Stopping node "ha-150891-m04"  ...
	I0731 22:55:16.426882 1202334 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0731 22:55:16.426924 1202334 main.go:141] libmachine: (ha-150891-m04) Calling .DriverName
	I0731 22:55:16.427266 1202334 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0731 22:55:16.427301 1202334 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHHostname
	I0731 22:55:16.430386 1202334 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:55:16.430932 1202334 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:54:44 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:55:16.430965 1202334 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:55:16.431156 1202334 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHPort
	I0731 22:55:16.431379 1202334 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHKeyPath
	I0731 22:55:16.431542 1202334 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHUsername
	I0731 22:55:16.431704 1202334 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m04/id_rsa Username:docker}
	I0731 22:55:16.514434 1202334 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0731 22:55:16.567610 1202334 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0731 22:55:16.620482 1202334 main.go:141] libmachine: Stopping "ha-150891-m04"...
	I0731 22:55:16.620514 1202334 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:55:16.622214 1202334 main.go:141] libmachine: (ha-150891-m04) Calling .Stop
	I0731 22:55:16.625926 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 0/120
	I0731 22:55:17.627512 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 1/120
	I0731 22:55:18.628862 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 2/120
	I0731 22:55:19.630152 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 3/120
	I0731 22:55:20.632721 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 4/120
	I0731 22:55:21.634284 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 5/120
	I0731 22:55:22.635760 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 6/120
	I0731 22:55:23.637203 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 7/120
	I0731 22:55:24.638965 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 8/120
	I0731 22:55:25.640604 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 9/120
	I0731 22:55:26.642935 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 10/120
	I0731 22:55:27.644469 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 11/120
	I0731 22:55:28.646545 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 12/120
	I0731 22:55:29.648042 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 13/120
	I0731 22:55:30.649567 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 14/120
	I0731 22:55:31.651637 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 15/120
	I0731 22:55:32.653152 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 16/120
	I0731 22:55:33.654764 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 17/120
	I0731 22:55:34.656251 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 18/120
	I0731 22:55:35.657707 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 19/120
	I0731 22:55:36.660212 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 20/120
	I0731 22:55:37.661958 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 21/120
	I0731 22:55:38.663497 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 22/120
	I0731 22:55:39.665526 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 23/120
	I0731 22:55:40.666965 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 24/120
	I0731 22:55:41.669283 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 25/120
	I0731 22:55:42.671054 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 26/120
	I0731 22:55:43.672383 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 27/120
	I0731 22:55:44.673825 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 28/120
	I0731 22:55:45.675228 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 29/120
	I0731 22:55:46.677320 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 30/120
	I0731 22:55:47.679518 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 31/120
	I0731 22:55:48.680889 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 32/120
	I0731 22:55:49.682507 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 33/120
	I0731 22:55:50.683911 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 34/120
	I0731 22:55:51.685873 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 35/120
	I0731 22:55:52.687482 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 36/120
	I0731 22:55:53.689085 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 37/120
	I0731 22:55:54.691336 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 38/120
	I0731 22:55:55.692886 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 39/120
	I0731 22:55:56.695323 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 40/120
	I0731 22:55:57.696931 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 41/120
	I0731 22:55:58.698210 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 42/120
	I0731 22:55:59.699688 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 43/120
	I0731 22:56:00.702359 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 44/120
	I0731 22:56:01.703908 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 45/120
	I0731 22:56:02.705429 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 46/120
	I0731 22:56:03.707735 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 47/120
	I0731 22:56:04.709185 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 48/120
	I0731 22:56:05.710484 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 49/120
	I0731 22:56:06.712829 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 50/120
	I0731 22:56:07.715187 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 51/120
	I0731 22:56:08.716671 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 52/120
	I0731 22:56:09.718038 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 53/120
	I0731 22:56:10.719552 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 54/120
	I0731 22:56:11.721758 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 55/120
	I0731 22:56:12.723186 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 56/120
	I0731 22:56:13.724867 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 57/120
	I0731 22:56:14.726335 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 58/120
	I0731 22:56:15.727872 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 59/120
	I0731 22:56:16.730126 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 60/120
	I0731 22:56:17.731914 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 61/120
	I0731 22:56:18.733524 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 62/120
	I0731 22:56:19.735203 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 63/120
	I0731 22:56:20.737332 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 64/120
	I0731 22:56:21.739551 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 65/120
	I0731 22:56:22.741056 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 66/120
	I0731 22:56:23.742529 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 67/120
	I0731 22:56:24.744562 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 68/120
	I0731 22:56:25.746714 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 69/120
	I0731 22:56:26.748748 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 70/120
	I0731 22:56:27.750110 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 71/120
	I0731 22:56:28.751330 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 72/120
	I0731 22:56:29.752781 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 73/120
	I0731 22:56:30.754021 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 74/120
	I0731 22:56:31.756130 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 75/120
	I0731 22:56:32.757563 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 76/120
	I0731 22:56:33.758844 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 77/120
	I0731 22:56:34.760138 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 78/120
	I0731 22:56:35.761770 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 79/120
	I0731 22:56:36.763174 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 80/120
	I0731 22:56:37.764712 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 81/120
	I0731 22:56:38.766837 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 82/120
	I0731 22:56:39.768371 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 83/120
	I0731 22:56:40.770665 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 84/120
	I0731 22:56:41.772876 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 85/120
	I0731 22:56:42.775301 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 86/120
	I0731 22:56:43.776768 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 87/120
	I0731 22:56:44.778494 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 88/120
	I0731 22:56:45.780121 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 89/120
	I0731 22:56:46.782551 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 90/120
	I0731 22:56:47.784401 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 91/120
	I0731 22:56:48.785637 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 92/120
	I0731 22:56:49.787072 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 93/120
	I0731 22:56:50.788497 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 94/120
	I0731 22:56:51.790393 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 95/120
	I0731 22:56:52.792172 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 96/120
	I0731 22:56:53.793480 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 97/120
	I0731 22:56:54.795464 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 98/120
	I0731 22:56:55.797016 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 99/120
	I0731 22:56:56.799517 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 100/120
	I0731 22:56:57.801103 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 101/120
	I0731 22:56:58.802655 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 102/120
	I0731 22:56:59.804437 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 103/120
	I0731 22:57:00.806637 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 104/120
	I0731 22:57:01.808840 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 105/120
	I0731 22:57:02.810719 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 106/120
	I0731 22:57:03.812335 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 107/120
	I0731 22:57:04.814983 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 108/120
	I0731 22:57:05.816662 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 109/120
	I0731 22:57:06.818908 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 110/120
	I0731 22:57:07.820977 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 111/120
	I0731 22:57:08.822795 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 112/120
	I0731 22:57:09.824615 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 113/120
	I0731 22:57:10.826786 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 114/120
	I0731 22:57:11.829111 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 115/120
	I0731 22:57:12.830940 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 116/120
	I0731 22:57:13.832586 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 117/120
	I0731 22:57:14.834830 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 118/120
	I0731 22:57:15.836600 1202334 main.go:141] libmachine: (ha-150891-m04) Waiting for machine to stop 119/120
	I0731 22:57:16.837961 1202334 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0731 22:57:16.838035 1202334 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0731 22:57:16.840266 1202334 out.go:177] 
	W0731 22:57:16.842120 1202334 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0731 22:57:16.842146 1202334 out.go:239] * 
	* 
	W0731 22:57:16.846832 1202334 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 22:57:16.848632 1202334 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-150891 stop -v=7 --alsologtostderr": exit status 82
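Exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above: the kvm2 driver polled the ha-150891-m04 domain for the full 120-attempt window (about two minutes, matching the 2m0.49s runtime) and it never left the "Running" state. When a node VM wedges like this, a hedged manual recovery, assuming the default qemu:///system connection the kvm2 driver uses and the domain name shown in the DBG lines, is:

	# Hedged sketch: confirm the stuck domain and hard power it off.
	virsh list --all
	virsh destroy ha-150891-m04        # immediate power-off, no guest shutdown
	# Retry the stop so minikube records the machine state cleanly.
	out/minikube-linux-amd64 -p ha-150891 stop -v=7 --alsologtostderr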
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr: exit status 3 (19.085225625s)

                                                
                                                
-- stdout --
	ha-150891
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-150891-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 22:57:16.902752 1202752 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:57:16.903053 1202752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:57:16.903066 1202752 out.go:304] Setting ErrFile to fd 2...
	I0731 22:57:16.903080 1202752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:57:16.903326 1202752 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:57:16.903564 1202752 out.go:298] Setting JSON to false
	I0731 22:57:16.903600 1202752 mustload.go:65] Loading cluster: ha-150891
	I0731 22:57:16.903677 1202752 notify.go:220] Checking for updates...
	I0731 22:57:16.904214 1202752 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:57:16.904242 1202752 status.go:255] checking status of ha-150891 ...
	I0731 22:57:16.904725 1202752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:57:16.904813 1202752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:57:16.926141 1202752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34407
	I0731 22:57:16.926703 1202752 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:57:16.927533 1202752 main.go:141] libmachine: Using API Version  1
	I0731 22:57:16.927558 1202752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:57:16.928145 1202752 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:57:16.928443 1202752 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:57:16.930507 1202752 status.go:330] ha-150891 host status = "Running" (err=<nil>)
	I0731 22:57:16.930539 1202752 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:57:16.930890 1202752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:57:16.930949 1202752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:57:16.947811 1202752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I0731 22:57:16.948385 1202752 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:57:16.949005 1202752 main.go:141] libmachine: Using API Version  1
	I0731 22:57:16.949044 1202752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:57:16.949517 1202752 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:57:16.949727 1202752 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:57:16.953363 1202752 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:57:16.953822 1202752 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:57:16.953855 1202752 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:57:16.954147 1202752 host.go:66] Checking if "ha-150891" exists ...
	I0731 22:57:16.954609 1202752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:57:16.954669 1202752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:57:16.970627 1202752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35887
	I0731 22:57:16.971095 1202752 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:57:16.971754 1202752 main.go:141] libmachine: Using API Version  1
	I0731 22:57:16.971785 1202752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:57:16.972156 1202752 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:57:16.972375 1202752 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:57:16.972629 1202752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:57:16.972674 1202752 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:57:16.975862 1202752 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:57:16.976479 1202752 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:57:16.976513 1202752 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:57:16.976733 1202752 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:57:16.976987 1202752 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:57:16.977184 1202752 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:57:16.977375 1202752 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:57:17.065593 1202752 ssh_runner.go:195] Run: systemctl --version
	I0731 22:57:17.073292 1202752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:57:17.089624 1202752 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:57:17.089678 1202752 api_server.go:166] Checking apiserver status ...
	I0731 22:57:17.089739 1202752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:57:17.106396 1202752 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5011/cgroup
	W0731 22:57:17.117066 1202752 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5011/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:57:17.117129 1202752 ssh_runner.go:195] Run: ls
	I0731 22:57:17.122072 1202752 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:57:17.128780 1202752 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:57:17.128815 1202752 status.go:422] ha-150891 apiserver status = Running (err=<nil>)
	I0731 22:57:17.128827 1202752 status.go:257] ha-150891 status: &{Name:ha-150891 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:57:17.128850 1202752 status.go:255] checking status of ha-150891-m02 ...
	I0731 22:57:17.129172 1202752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:57:17.129197 1202752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:57:17.144800 1202752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40993
	I0731 22:57:17.145278 1202752 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:57:17.145854 1202752 main.go:141] libmachine: Using API Version  1
	I0731 22:57:17.145878 1202752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:57:17.146286 1202752 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:57:17.146512 1202752 main.go:141] libmachine: (ha-150891-m02) Calling .GetState
	I0731 22:57:17.148349 1202752 status.go:330] ha-150891-m02 host status = "Running" (err=<nil>)
	I0731 22:57:17.148374 1202752 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:57:17.148716 1202752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:57:17.148743 1202752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:57:17.164049 1202752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45111
	I0731 22:57:17.164602 1202752 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:57:17.165157 1202752 main.go:141] libmachine: Using API Version  1
	I0731 22:57:17.165185 1202752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:57:17.165560 1202752 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:57:17.165759 1202752 main.go:141] libmachine: (ha-150891-m02) Calling .GetIP
	I0731 22:57:17.168953 1202752 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:57:17.169476 1202752 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:52:29 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:57:17.169512 1202752 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:57:17.169711 1202752 host.go:66] Checking if "ha-150891-m02" exists ...
	I0731 22:57:17.170006 1202752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:57:17.170050 1202752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:57:17.185805 1202752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I0731 22:57:17.186284 1202752 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:57:17.186772 1202752 main.go:141] libmachine: Using API Version  1
	I0731 22:57:17.186795 1202752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:57:17.187184 1202752 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:57:17.187419 1202752 main.go:141] libmachine: (ha-150891-m02) Calling .DriverName
	I0731 22:57:17.187696 1202752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:57:17.187727 1202752 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHHostname
	I0731 22:57:17.191175 1202752 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:57:17.191793 1202752 main.go:141] libmachine: (ha-150891-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:a1:dd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:52:29 +0000 UTC Type:0 Mac:52:54:00:60:a1:dd Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-150891-m02 Clientid:01:52:54:00:60:a1:dd}
	I0731 22:57:17.191827 1202752 main.go:141] libmachine: (ha-150891-m02) DBG | domain ha-150891-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:60:a1:dd in network mk-ha-150891
	I0731 22:57:17.191915 1202752 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHPort
	I0731 22:57:17.192187 1202752 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHKeyPath
	I0731 22:57:17.192375 1202752 main.go:141] libmachine: (ha-150891-m02) Calling .GetSSHUsername
	I0731 22:57:17.192614 1202752 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m02/id_rsa Username:docker}
	I0731 22:57:17.276300 1202752 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 22:57:17.294392 1202752 kubeconfig.go:125] found "ha-150891" server: "https://192.168.39.254:8443"
	I0731 22:57:17.294430 1202752 api_server.go:166] Checking apiserver status ...
	I0731 22:57:17.294478 1202752 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 22:57:17.311320 1202752 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup
	W0731 22:57:17.324527 1202752 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1374/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 22:57:17.324589 1202752 ssh_runner.go:195] Run: ls
	I0731 22:57:17.329297 1202752 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0731 22:57:17.333816 1202752 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0731 22:57:17.333850 1202752 status.go:422] ha-150891-m02 apiserver status = Running (err=<nil>)
	I0731 22:57:17.333859 1202752 status.go:257] ha-150891-m02 status: &{Name:ha-150891-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 22:57:17.333877 1202752 status.go:255] checking status of ha-150891-m04 ...
	I0731 22:57:17.334278 1202752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:57:17.334313 1202752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:57:17.351246 1202752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44113
	I0731 22:57:17.351737 1202752 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:57:17.352245 1202752 main.go:141] libmachine: Using API Version  1
	I0731 22:57:17.352271 1202752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:57:17.352675 1202752 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:57:17.352891 1202752 main.go:141] libmachine: (ha-150891-m04) Calling .GetState
	I0731 22:57:17.354690 1202752 status.go:330] ha-150891-m04 host status = "Running" (err=<nil>)
	I0731 22:57:17.354710 1202752 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:57:17.355003 1202752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:57:17.355032 1202752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:57:17.371115 1202752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42337
	I0731 22:57:17.371562 1202752 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:57:17.372024 1202752 main.go:141] libmachine: Using API Version  1
	I0731 22:57:17.372044 1202752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:57:17.372406 1202752 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:57:17.372635 1202752 main.go:141] libmachine: (ha-150891-m04) Calling .GetIP
	I0731 22:57:17.375560 1202752 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:57:17.376050 1202752 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:54:44 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:57:17.376080 1202752 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:57:17.376286 1202752 host.go:66] Checking if "ha-150891-m04" exists ...
	I0731 22:57:17.376625 1202752 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:57:17.376683 1202752 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:57:17.392761 1202752 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44797
	I0731 22:57:17.393273 1202752 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:57:17.393805 1202752 main.go:141] libmachine: Using API Version  1
	I0731 22:57:17.393830 1202752 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:57:17.394134 1202752 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:57:17.394261 1202752 main.go:141] libmachine: (ha-150891-m04) Calling .DriverName
	I0731 22:57:17.394399 1202752 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 22:57:17.394424 1202752 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHHostname
	I0731 22:57:17.397296 1202752 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:57:17.397649 1202752 main.go:141] libmachine: (ha-150891-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:bc:bd", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:54:44 +0000 UTC Type:0 Mac:52:54:00:af:bc:bd Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:ha-150891-m04 Clientid:01:52:54:00:af:bc:bd}
	I0731 22:57:17.397673 1202752 main.go:141] libmachine: (ha-150891-m04) DBG | domain ha-150891-m04 has defined IP address 192.168.39.120 and MAC address 52:54:00:af:bc:bd in network mk-ha-150891
	I0731 22:57:17.397927 1202752 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHPort
	I0731 22:57:17.398122 1202752 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHKeyPath
	I0731 22:57:17.398353 1202752 main.go:141] libmachine: (ha-150891-m04) Calling .GetSSHUsername
	I0731 22:57:17.398519 1202752 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891-m04/id_rsa Username:docker}
	W0731 22:57:35.936333 1202752 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.120:22: connect: no route to host
	W0731 22:57:35.936443 1202752 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	E0731 22:57:35.936466 1202752 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host
	I0731 22:57:35.936474 1202752 status.go:257] ha-150891-m04 status: &{Name:ha-150891-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0731 22:57:35.936504 1202752 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.120:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr" : exit status 3
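The exit status 3 follows from the m04 entry in the status output above: libvirt still reports the guest as running, but SSH to 192.168.39.120:22 fails with "no route to host", so minikube marks the host as Error and the kubelet as Nonexistent. A hedged first step is to compare the lease libvirt holds for the guest with what the host can actually reach, for example:

	# Hedged sketch: what address does libvirt think m04 has, and is it reachable?
	virsh domifaddr ha-150891-m04
	ping -c 3 192.168.39.120
	nc -vz 192.168.39.120 22           # is sshd answering at all?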
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-150891 -n ha-150891
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-150891 logs -n 25: (1.623405643s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-150891 ssh -n ha-150891-m02 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m03_ha-150891-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04:/home/docker/cp-test_ha-150891-m03_ha-150891-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m04 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m03_ha-150891-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp testdata/cp-test.txt                                                | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3873107821/001/cp-test_ha-150891-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891:/home/docker/cp-test_ha-150891-m04_ha-150891.txt                       |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891 sudo cat                                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891.txt                                 |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m02:/home/docker/cp-test_ha-150891-m04_ha-150891-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m02 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m03:/home/docker/cp-test_ha-150891-m04_ha-150891-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n                                                                 | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | ha-150891-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-150891 ssh -n ha-150891-m03 sudo cat                                          | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC | 31 Jul 24 22:45 UTC |
	|         | /home/docker/cp-test_ha-150891-m04_ha-150891-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-150891 node stop m02 -v=7                                                     | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-150891 node start m02 -v=7                                                    | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-150891 -v=7                                                           | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-150891 -v=7                                                                | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-150891 --wait=true -v=7                                                    | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:50 UTC | 31 Jul 24 22:54 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-150891                                                                | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:54 UTC |                     |
	| node    | ha-150891 node delete m03 -v=7                                                   | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:54 UTC | 31 Jul 24 22:55 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-150891 stop -v=7                                                              | ha-150891 | jenkins | v1.33.1 | 31 Jul 24 22:55 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 22:50:44
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 22:50:44.217335 1200572 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:50:44.217474 1200572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:50:44.217486 1200572 out.go:304] Setting ErrFile to fd 2...
	I0731 22:50:44.217492 1200572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:50:44.217728 1200572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:50:44.218344 1200572 out.go:298] Setting JSON to false
	I0731 22:50:44.219468 1200572 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":23595,"bootTime":1722442649,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 22:50:44.219551 1200572 start.go:139] virtualization: kvm guest
	I0731 22:50:44.222010 1200572 out.go:177] * [ha-150891] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 22:50:44.223485 1200572 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:50:44.223525 1200572 notify.go:220] Checking for updates...
	I0731 22:50:44.225862 1200572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:50:44.227351 1200572 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:50:44.228772 1200572 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:50:44.230081 1200572 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 22:50:44.231320 1200572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:50:44.232947 1200572 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:50:44.233080 1200572 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:50:44.233554 1200572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:50:44.233618 1200572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:50:44.249691 1200572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0731 22:50:44.250159 1200572 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:50:44.250777 1200572 main.go:141] libmachine: Using API Version  1
	I0731 22:50:44.250797 1200572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:50:44.251178 1200572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:50:44.251395 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:50:44.291027 1200572 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 22:50:44.292159 1200572 start.go:297] selected driver: kvm2
	I0731 22:50:44.292178 1200572 start.go:901] validating driver "kvm2" against &{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.120 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:50:44.292361 1200572 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:50:44.292799 1200572 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:50:44.292901 1200572 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 22:50:44.310219 1200572 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 22:50:44.310958 1200572 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 22:50:44.311021 1200572 cni.go:84] Creating CNI manager for ""
	I0731 22:50:44.311030 1200572 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 22:50:44.311084 1200572 start.go:340] cluster config:
	{Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.120 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-ti
ller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:50:44.311235 1200572 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 22:50:44.314540 1200572 out.go:177] * Starting "ha-150891" primary control-plane node in "ha-150891" cluster
	I0731 22:50:44.315710 1200572 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:50:44.315752 1200572 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 22:50:44.315767 1200572 cache.go:56] Caching tarball of preloaded images
	I0731 22:50:44.315873 1200572 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 22:50:44.315887 1200572 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 22:50:44.316034 1200572 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/config.json ...
	I0731 22:50:44.316292 1200572 start.go:360] acquireMachinesLock for ha-150891: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 22:50:44.316346 1200572 start.go:364] duration metric: took 29.338µs to acquireMachinesLock for "ha-150891"
	I0731 22:50:44.316372 1200572 start.go:96] Skipping create...Using existing machine configuration
	I0731 22:50:44.316381 1200572 fix.go:54] fixHost starting: 
	I0731 22:50:44.316666 1200572 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:50:44.316708 1200572 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:50:44.332415 1200572 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37317
	I0731 22:50:44.332899 1200572 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:50:44.333454 1200572 main.go:141] libmachine: Using API Version  1
	I0731 22:50:44.333480 1200572 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:50:44.333890 1200572 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:50:44.334126 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:50:44.334267 1200572 main.go:141] libmachine: (ha-150891) Calling .GetState
	I0731 22:50:44.335933 1200572 fix.go:112] recreateIfNeeded on ha-150891: state=Running err=<nil>
	W0731 22:50:44.335969 1200572 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 22:50:44.337621 1200572 out.go:177] * Updating the running kvm2 "ha-150891" VM ...
	I0731 22:50:44.338567 1200572 machine.go:94] provisionDockerMachine start ...
	I0731 22:50:44.338594 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:50:44.338912 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:44.341836 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.342373 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:44.342405 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.342593 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:50:44.342817 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.342978 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.343102 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:50:44.343288 1200572 main.go:141] libmachine: Using SSH client type: native
	I0731 22:50:44.343560 1200572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:50:44.343578 1200572 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 22:50:44.460662 1200572 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150891
	
	I0731 22:50:44.460700 1200572 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:50:44.460965 1200572 buildroot.go:166] provisioning hostname "ha-150891"
	I0731 22:50:44.460997 1200572 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:50:44.461226 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:44.463952 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.464344 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:44.464370 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.464552 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:50:44.464780 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.464915 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.465070 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:50:44.465210 1200572 main.go:141] libmachine: Using SSH client type: native
	I0731 22:50:44.465414 1200572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:50:44.465436 1200572 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-150891 && echo "ha-150891" | sudo tee /etc/hostname
	I0731 22:50:44.595007 1200572 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-150891
	
	I0731 22:50:44.595041 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:44.598178 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.598600 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:44.598625 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.598818 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:50:44.599023 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.599221 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:44.599359 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:50:44.599530 1200572 main.go:141] libmachine: Using SSH client type: native
	I0731 22:50:44.599704 1200572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:50:44.599725 1200572 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-150891' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-150891/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-150891' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 22:50:44.713075 1200572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 22:50:44.713106 1200572 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 22:50:44.713152 1200572 buildroot.go:174] setting up certificates
	I0731 22:50:44.713163 1200572 provision.go:84] configureAuth start
	I0731 22:50:44.713175 1200572 main.go:141] libmachine: (ha-150891) Calling .GetMachineName
	I0731 22:50:44.713515 1200572 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:50:44.716296 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.716765 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:44.716790 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.717002 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:44.719564 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.719960 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:44.719992 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:44.720215 1200572 provision.go:143] copyHostCerts
	I0731 22:50:44.720246 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:50:44.720295 1200572 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 22:50:44.720316 1200572 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 22:50:44.720402 1200572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 22:50:44.720497 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:50:44.720523 1200572 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 22:50:44.720530 1200572 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 22:50:44.720569 1200572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 22:50:44.720635 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:50:44.720660 1200572 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 22:50:44.720669 1200572 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 22:50:44.720698 1200572 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 22:50:44.720771 1200572 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.ha-150891 san=[127.0.0.1 192.168.39.105 ha-150891 localhost minikube]
	I0731 22:50:45.005447 1200572 provision.go:177] copyRemoteCerts
	I0731 22:50:45.005523 1200572 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 22:50:45.005557 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:45.008626 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:45.009052 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:45.009084 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:45.009291 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:50:45.009582 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:45.009762 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:50:45.009918 1200572 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:50:45.098405 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 22:50:45.098501 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 22:50:45.129746 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 22:50:45.129838 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0731 22:50:45.157306 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 22:50:45.157401 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 22:50:45.182984 1200572 provision.go:87] duration metric: took 469.803186ms to configureAuth
	I0731 22:50:45.183018 1200572 buildroot.go:189] setting minikube options for container-runtime
	I0731 22:50:45.183243 1200572 config.go:182] Loaded profile config "ha-150891": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:50:45.183320 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:50:45.186016 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:45.186348 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:50:45.186371 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:50:45.186571 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:50:45.186792 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:45.186965 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:50:45.187160 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:50:45.187337 1200572 main.go:141] libmachine: Using SSH client type: native
	I0731 22:50:45.187574 1200572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:50:45.187596 1200572 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 22:52:15.949999 1200572 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 22:52:15.950044 1200572 machine.go:97] duration metric: took 1m31.611461705s to provisionDockerMachine
	I0731 22:52:15.950058 1200572 start.go:293] postStartSetup for "ha-150891" (driver="kvm2")
	I0731 22:52:15.950069 1200572 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 22:52:15.950117 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:15.950537 1200572 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 22:52:15.950569 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:52:15.953909 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:15.954490 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:15.954522 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:15.954811 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:52:15.955072 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:15.955324 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:52:15.955521 1200572 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:52:16.042723 1200572 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 22:52:16.047266 1200572 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 22:52:16.047302 1200572 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 22:52:16.047380 1200572 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 22:52:16.047461 1200572 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 22:52:16.047473 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /etc/ssl/certs/11794002.pem
	I0731 22:52:16.047568 1200572 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 22:52:16.057390 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:52:16.082401 1200572 start.go:296] duration metric: took 132.326259ms for postStartSetup
	I0731 22:52:16.082456 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:16.082808 1200572 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0731 22:52:16.082837 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:52:16.086003 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.086442 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:16.086466 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.086734 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:52:16.086958 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:16.087196 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:52:16.087362 1200572 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	W0731 22:52:16.174414 1200572 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0731 22:52:16.174454 1200572 fix.go:56] duration metric: took 1m31.858073511s for fixHost
	I0731 22:52:16.174479 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:52:16.177315 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.177738 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:16.177762 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.177982 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:52:16.178213 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:16.178388 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:16.178510 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:52:16.178711 1200572 main.go:141] libmachine: Using SSH client type: native
	I0731 22:52:16.178886 1200572 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.105 22 <nil> <nil>}
	I0731 22:52:16.178897 1200572 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 22:52:16.293066 1200572 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722466336.264134929
	
	I0731 22:52:16.293097 1200572 fix.go:216] guest clock: 1722466336.264134929
	I0731 22:52:16.293107 1200572 fix.go:229] Guest: 2024-07-31 22:52:16.264134929 +0000 UTC Remote: 2024-07-31 22:52:16.174461343 +0000 UTC m=+91.996620433 (delta=89.673586ms)
	I0731 22:52:16.293139 1200572 fix.go:200] guest clock delta is within tolerance: 89.673586ms
	I0731 22:52:16.293147 1200572 start.go:83] releasing machines lock for "ha-150891", held for 1m31.97678769s
	I0731 22:52:16.293174 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:16.293527 1200572 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:52:16.296331 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.296818 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:16.296846 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.297082 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:16.297757 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:16.297976 1200572 main.go:141] libmachine: (ha-150891) Calling .DriverName
	I0731 22:52:16.298085 1200572 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 22:52:16.298146 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:52:16.298231 1200572 ssh_runner.go:195] Run: cat /version.json
	I0731 22:52:16.298259 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHHostname
	I0731 22:52:16.301164 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.301376 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.301549 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:16.301578 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.301697 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:52:16.301874 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:16.301885 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:16.301903 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:16.302057 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:52:16.302133 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHPort
	I0731 22:52:16.302251 1200572 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:52:16.302277 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHKeyPath
	I0731 22:52:16.302414 1200572 main.go:141] libmachine: (ha-150891) Calling .GetSSHUsername
	I0731 22:52:16.302580 1200572 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/ha-150891/id_rsa Username:docker}
	I0731 22:52:16.404151 1200572 ssh_runner.go:195] Run: systemctl --version
	I0731 22:52:16.410125 1200572 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 22:52:16.571040 1200572 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 22:52:16.579567 1200572 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 22:52:16.579664 1200572 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 22:52:16.589227 1200572 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 22:52:16.589262 1200572 start.go:495] detecting cgroup driver to use...
	I0731 22:52:16.589364 1200572 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 22:52:16.606500 1200572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 22:52:16.620790 1200572 docker.go:217] disabling cri-docker service (if available) ...
	I0731 22:52:16.620883 1200572 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 22:52:16.635400 1200572 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 22:52:16.650021 1200572 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 22:52:16.800140 1200572 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 22:52:16.961235 1200572 docker.go:233] disabling docker service ...
	I0731 22:52:16.961312 1200572 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 22:52:16.981803 1200572 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 22:52:16.996566 1200572 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 22:52:17.150113 1200572 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 22:52:17.316349 1200572 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 22:52:17.330875 1200572 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 22:52:17.349762 1200572 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 22:52:17.349831 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.360749 1200572 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 22:52:17.360831 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.371473 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.382561 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.393503 1200572 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 22:52:17.404740 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.415972 1200572 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.427390 1200572 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 22:52:17.438419 1200572 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 22:52:17.448575 1200572 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 22:52:17.459000 1200572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:52:17.606614 1200572 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 22:52:17.894404 1200572 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 22:52:17.894491 1200572 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 22:52:17.899642 1200572 start.go:563] Will wait 60s for crictl version
	I0731 22:52:17.899710 1200572 ssh_runner.go:195] Run: which crictl
	I0731 22:52:17.903650 1200572 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 22:52:17.937148 1200572 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 22:52:17.937237 1200572 ssh_runner.go:195] Run: crio --version
	I0731 22:52:17.965264 1200572 ssh_runner.go:195] Run: crio --version
	I0731 22:52:17.997598 1200572 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 22:52:17.999122 1200572 main.go:141] libmachine: (ha-150891) Calling .GetIP
	I0731 22:52:18.002319 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:18.002820 1200572 main.go:141] libmachine: (ha-150891) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5d:5d:f5", ip: ""} in network mk-ha-150891: {Iface:virbr1 ExpiryTime:2024-07-31 23:40:54 +0000 UTC Type:0 Mac:52:54:00:5d:5d:f5 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-150891 Clientid:01:52:54:00:5d:5d:f5}
	I0731 22:52:18.002846 1200572 main.go:141] libmachine: (ha-150891) DBG | domain ha-150891 has defined IP address 192.168.39.105 and MAC address 52:54:00:5d:5d:f5 in network mk-ha-150891
	I0731 22:52:18.003045 1200572 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 22:52:18.008132 1200572 kubeadm.go:883] updating cluster {Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.120 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fr
eshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 22:52:18.008325 1200572 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 22:52:18.008413 1200572 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:52:18.052964 1200572 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 22:52:18.053007 1200572 crio.go:433] Images already preloaded, skipping extraction
	I0731 22:52:18.053077 1200572 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 22:52:18.091529 1200572 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 22:52:18.091558 1200572 cache_images.go:84] Images are preloaded, skipping loading
	I0731 22:52:18.091568 1200572 kubeadm.go:934] updating node { 192.168.39.105 8443 v1.30.3 crio true true} ...
	I0731 22:52:18.091680 1200572 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-150891 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
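The kubelet unit text dumped above uses the standard systemd drop-in override pattern: the empty "ExecStart=" line clears the command inherited from the base kubelet.service before the next line re-defines it with the minikube-specific flags. A minimal sketch of rendering such a drop-in is shown below; the template, struct and values are taken from the log above and are illustrative only, not minikube's actual code (the rendered text is what later gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, see the scp line further down).

    // dropin_sketch.go - illustrative only; not minikube's implementation.
    // Renders a kubelet drop-in like the one logged above. The empty "ExecStart="
    // clears the base unit's command before the override re-defines it.
    package main

    import (
    	"fmt"
    	"os"
    	"text/template"
    )

    const dropin = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	tmpl := template.Must(template.New("dropin").Parse(dropin))
    	// Values taken from the log above.
    	data := struct{ KubernetesVersion, NodeName, NodeIP string }{"v1.30.3", "ha-150891", "192.168.39.105"}
    	if err := tmpl.Execute(os.Stdout, data); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }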
	I0731 22:52:18.091769 1200572 ssh_runner.go:195] Run: crio config
	I0731 22:52:18.146927 1200572 cni.go:84] Creating CNI manager for ""
	I0731 22:52:18.146949 1200572 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0731 22:52:18.146959 1200572 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 22:52:18.146984 1200572 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.105 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-150891 NodeName:ha-150891 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 22:52:18.147143 1200572 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.105
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-150891"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
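The "0%!"(MISSING)" strings in the evictionHard block above are a printf artifact rather than part of the generated KubeletConfiguration: the thresholds are rendered as "0%", and when that rendered text is later passed through a printf-style call, the %" sequence is treated as an unknown verb with no operand. The exact call site inside minikube is not shown in this log, so the sketch below only reproduces the mangling itself.

    // fmt_verb_artifact.go - minimal reproduction of the "%!\"(MISSING)" strings
    // seen in the KubeletConfiguration dump above.
    package main

    import "fmt"

    func main() {
    	rendered := `nodefs.available: "0%"`

    	fmt.Println(rendered)       // prints: nodefs.available: "0%"
    	// Passing the rendered text as a format string (as a logger might) mangles it:
    	fmt.Printf(rendered + "\n") // prints: nodefs.available: "0%!"(MISSING)
    }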
	
	I0731 22:52:18.147164 1200572 kube-vip.go:115] generating kube-vip config ...
	I0731 22:52:18.147222 1200572 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0731 22:52:18.159526 1200572 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0731 22:52:18.159649 1200572 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
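The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp line below), where the kubelet runs it as a static pod; with cp_enable and lb_enable set, kube-vip holds the VIP 192.168.39.254 and fronts the API server port 8443 for the control-plane nodes. The probe below is a hypothetical, stdlib-only check against that VIP and port, not part of the test itself.

    // vip_probe_sketch.go - illustrative only. Probes the control-plane VIP that
    // the kube-vip manifest above advertises (address 192.168.39.254, lb_port 8443).
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := net.JoinHostPort("192.168.39.254", "8443") // values from the manifest above
    	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    	if err != nil {
    		fmt.Println("VIP not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	fmt.Println("VIP reachable via", conn.RemoteAddr())
    }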
	I0731 22:52:18.159740 1200572 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 22:52:18.170825 1200572 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 22:52:18.170898 1200572 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0731 22:52:18.181163 1200572 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0731 22:52:18.198318 1200572 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 22:52:18.215416 1200572 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0731 22:52:18.232948 1200572 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0731 22:52:18.250767 1200572 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0731 22:52:18.254890 1200572 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 22:52:18.414341 1200572 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 22:52:18.430067 1200572 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891 for IP: 192.168.39.105
	I0731 22:52:18.430091 1200572 certs.go:194] generating shared ca certs ...
	I0731 22:52:18.430109 1200572 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:52:18.430293 1200572 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 22:52:18.430337 1200572 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 22:52:18.430350 1200572 certs.go:256] generating profile certs ...
	I0731 22:52:18.430441 1200572 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/client.key
	I0731 22:52:18.430470 1200572 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.b3ebaa97
	I0731 22:52:18.430485 1200572 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.b3ebaa97 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.105 192.168.39.224 192.168.39.241 192.168.39.254]
	I0731 22:52:18.561064 1200572 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.b3ebaa97 ...
	I0731 22:52:18.561102 1200572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.b3ebaa97: {Name:mk2ff593ee3e47083d976067ae0ef73087f1db96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:52:18.561292 1200572 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.b3ebaa97 ...
	I0731 22:52:18.561305 1200572 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.b3ebaa97: {Name:mk9358e1f80e93c54df5c399710f0e6123bbc559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 22:52:18.561382 1200572 certs.go:381] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt.b3ebaa97 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt
	I0731 22:52:18.561550 1200572 certs.go:385] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key.b3ebaa97 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key
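The apiserver certificate generated above carries IP SANs for the service IP, localhost addresses, all three control-plane node IPs and the HA VIP. The sketch below simply prints those SANs back out of the written certificate; the path is taken from the log (it lives on the CI host) and the helper itself is illustrative only.

    // san_check_sketch.go - illustrative only. Prints the IP SANs baked into the
    // apiserver certificate generated above.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt"
    	raw, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block in", path)
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for _, ip := range cert.IPAddresses {
    		fmt.Println(ip) // expect 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.105/.224/.241/.254
    	}
    }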
	I0731 22:52:18.561686 1200572 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key
	I0731 22:52:18.561702 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 22:52:18.561715 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 22:52:18.561728 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 22:52:18.561738 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 22:52:18.561750 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 22:52:18.561764 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 22:52:18.561775 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 22:52:18.561789 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 22:52:18.561835 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 22:52:18.561877 1200572 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 22:52:18.561886 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 22:52:18.561906 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 22:52:18.561927 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 22:52:18.561947 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 22:52:18.561985 1200572 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 22:52:18.562009 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem -> /usr/share/ca-certificates/1179400.pem
	I0731 22:52:18.562024 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /usr/share/ca-certificates/11794002.pem
	I0731 22:52:18.562037 1200572 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:52:18.562633 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 22:52:18.588138 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 22:52:18.612880 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 22:52:18.638269 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 22:52:18.663895 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0731 22:52:18.689022 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 22:52:18.713785 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 22:52:18.738914 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/ha-150891/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 22:52:18.763986 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 22:52:18.788897 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 22:52:18.814007 1200572 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 22:52:18.839761 1200572 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 22:52:18.856902 1200572 ssh_runner.go:195] Run: openssl version
	I0731 22:52:18.862812 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 22:52:18.873944 1200572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:52:18.878994 1200572 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:52:18.879078 1200572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 22:52:18.885217 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 22:52:18.895040 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 22:52:18.906615 1200572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 22:52:18.911720 1200572 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 22:52:18.911800 1200572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 22:52:18.917966 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
	I0731 22:52:18.928071 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 22:52:18.939371 1200572 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 22:52:18.944154 1200572 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 22:52:18.944239 1200572 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 22:52:18.950023 1200572 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
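The "openssl x509 -hash" / "ln -fs" pairs above install each CA under /etc/ssl/certs/<subject-hash>.0, the naming scheme OpenSSL uses to look up trust anchors by hashed subject name. The sketch below performs the same two steps for one certificate; the path is taken from the log, while the helper itself is hypothetical and would need root on the node.

    // hash_link_sketch.go - illustrative only. Mirrors the "openssl x509 -hash" +
    // "ln -fs" pair logged above: compute the subject hash of a CA certificate and
    // link it under /etc/ssl/certs/<hash>.0.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above

    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "hashing failed:", err)
    		os.Exit(1)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log

    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link, like ln -fs does
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Fprintln(os.Stderr, "linking failed:", err)
    		os.Exit(1)
    	}
    	fmt.Println("linked", cert, "->", link)
    }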
	I0731 22:52:18.959669 1200572 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 22:52:18.964562 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 22:52:18.970429 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 22:52:18.976425 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 22:52:18.982244 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 22:52:18.988035 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 22:52:18.993919 1200572 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
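Each "openssl x509 -noout ... -checkend 86400" run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now. The sketch below is a stdlib Go equivalent for one of the files checked; it is illustrative only, not minikube's implementation.

    // checkend_sketch.go - illustrative only. Equivalent of
    // "openssl x509 -noout -checkend 86400": report whether a certificate
    // expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// One of the files checked in the log above.
    	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(raw)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	deadline := time.Now().Add(24 * time.Hour) // -checkend 86400
    	if cert.NotAfter.Before(deadline) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid past 24h, expires", cert.NotAfter)
    }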
	I0731 22:52:18.999811 1200572 kubeadm.go:392] StartCluster: {Name:ha-150891 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-150891 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.105 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.120 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fresh
pod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:52:18.999949 1200572 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 22:52:19.000020 1200572 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 22:52:19.036133 1200572 cri.go:89] found id: "e32bcdbc931f7a75c1f40f7c3839d94e018c6c9beb067b341eaf6f7f2855661d"
	I0731 22:52:19.036170 1200572 cri.go:89] found id: "09367b7e537fc53bc59177ce2dd80ed599a9b96efdacdc59b8d5043c37b1200c"
	I0731 22:52:19.036177 1200572 cri.go:89] found id: "aa312cf0b6219dcc4a642e96d32d4947f9a59f82178020e3ce208a74292c12c5"
	I0731 22:52:19.036183 1200572 cri.go:89] found id: "6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811"
	I0731 22:52:19.036187 1200572 cri.go:89] found id: "e3efb8efde2a05c2c5ee11cb57e2715c8dbdcdbf679b9c4fe830a41da4707f26"
	I0731 22:52:19.036191 1200572 cri.go:89] found id: "569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2"
	I0731 22:52:19.036196 1200572 cri.go:89] found id: "6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f"
	I0731 22:52:19.036199 1200572 cri.go:89] found id: "45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526"
	I0731 22:52:19.036203 1200572 cri.go:89] found id: "8ab90b2c667e4a162bc2808fd67610192ef721b38e5015a42dd1d8f9d180fc85"
	I0731 22:52:19.036227 1200572 cri.go:89] found id: "8ae0e6eb6658d7fdb8a2a8d777eeb51b8ae2333cbdbd136bba21acafad76b1e5"
	I0731 22:52:19.036245 1200572 cri.go:89] found id: "92f65fc372a62ece1342350ac226c2525fe63b23b4653f1650709b8a8ce71e86"
	I0731 22:52:19.036250 1200572 cri.go:89] found id: "31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8"
	I0731 22:52:19.036258 1200572 cri.go:89] found id: "c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78"
	I0731 22:52:19.036262 1200572 cri.go:89] found id: ""
	I0731 22:52:19.036324 1200572 ssh_runner.go:195] Run: sudo runc list -f json
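The IDs listed above come from the "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" query, which prints one container ID per line. The sketch below shows how a caller might run the same query and collect the IDs; it is illustrative only, not minikube's cri package.

    // crictl_ids_sketch.go - illustrative only. Runs the same crictl query logged
    // above and collects the returned container IDs (one hex ID per output line).
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "crictl failed:", err)
    		os.Exit(1)
    	}
    	var ids []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if id := strings.TrimSpace(line); id != "" {
    			ids = append(ids, id)
    		}
    	}
    	fmt.Printf("found %d kube-system containers\n", len(ids))
    	for _, id := range ids {
    		fmt.Println(id)
    	}
    }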
	
	
	==> CRI-O <==
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.620199067Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac76c510-e056-4eed-b315-54463fbefec0 name=/runtime.v1.RuntimeService/Version
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.621277134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2fcee3cd-c457-42d7-9cd8-afe7264c2a08 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.621913987Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466656621888656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fcee3cd-c457-42d7-9cd8-afe7264c2a08 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.622603326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a3e3265-61a2-4de8-bcca-2b24edbcb28a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.622670729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a3e3265-61a2-4de8-bcca-2b24edbcb28a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.623219493Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e523a71817c2281745cea76e1c3eb9d6a34ab71c970be9f279e434b16584212,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722466428005721861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936da742737fd9866ed6f1699fd59673b966983ce9ab155496170b8dc0d69c0f,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722466387006465100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff53188db11c806bf35dd07eff4b1128c44be285a01574f67f09ef715b4b10e,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722466382001895181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722466381007815423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0c62213b2094cb3bc6f5ca0ee611bfcf838f220d9eac73b1f247f9306b7b12,PodSandboxId:6c486d06f45f10efcffeaa496724cce5b36a7b76bac2d1364cd435d6d29ee346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722466379416346806,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759aef3785027e51a6ffaf4600501de1c4f172255ba0a56a4a066e48b76815cd,PodSandboxId:b5a9d2dab28855158bb8d7d8a912eb4a3b0c8b8c4b65dd70e27205a54b4eeef5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722466358688421220,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a78ef189dbee5a2486ddd9b05d358c71,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9adf2f762249c086770be0dccdad6342b31bb791b5049eab3c303f2a4f58b6d,PodSandboxId:eedad7f368c86a11a78608bf4c65b2e704562d184cdff0196197312c0be308a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722466345915403089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:99532a403baebdc6663749afaafe4f1278665f15fd9881cec7372cb3bd7a22cf,PodSandboxId:fa08797866ea77da5dcac572e4faa6cfb6a6c19ade5d505d0067648fa01291b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466346138487527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2138e0d9e3344e3b532f5acecf75d27a8cedbb6953fc63d24ede45cbd0006b9b,PodSandboxId:fa3085df7464ae26df2bfda86ac8e5b5ea3ee13774ba1a145488ac6e5b2abab5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722466345970782157,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d90c4a99cb0fa58abef1bd6bf5fc8a793d6dc437e6b34cbea0e8dad8fff1b1,PodSandboxId:01adb669956067e8567526b7ae42a30b92bb8ac20b697066b67ed9e531f12c8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466345888180447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:533eebe6788d6b43158307f1805677a1d4a93ba42b1a49930bc8e20cf70bb248,PodSandboxId:07d745111be0ce07d4ad6a3c6a25ff5e77b3848179d092aa5a8207e952180f2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722466345742159642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc5a6318a06b6592b305ff451abdd474f1f8c15db3ecb3842e8b6bb78ee8927,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722466345818498562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405
763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13dc8c74ad52362b9fef7a6a07c57b1ce5cee751af4222dbea80f7591f98aba,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722466345680568306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417
d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5db421039fa4ddf654b2c8427782fc8c571483506ab0654c5e1e1a9332dbe52,PodSandboxId:b7a5cd34a635aee7a672b63491218083fc9a5bae4ff51c73b6caaf1e6408636e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722466345641850218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722465854210269279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712295655329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712236095444,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722465700318134680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722465696992304673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722465676786785746,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722465676790601629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a3e3265-61a2-4de8-bcca-2b24edbcb28a name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.668261784Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6aa4a98f-f175-4496-8518-d90e430e326a name=/runtime.v1.RuntimeService/Version
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.668386103Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6aa4a98f-f175-4496-8518-d90e430e326a name=/runtime.v1.RuntimeService/Version
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.669921067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df32358f-e739-4de5-914c-03e43b184d77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.670458211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466656670430596,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df32358f-e739-4de5-914c-03e43b184d77 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.671092658Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd7ac43b-636c-4db6-aa4c-6384264b9558 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.671168972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd7ac43b-636c-4db6-aa4c-6384264b9558 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.671565427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e523a71817c2281745cea76e1c3eb9d6a34ab71c970be9f279e434b16584212,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722466428005721861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936da742737fd9866ed6f1699fd59673b966983ce9ab155496170b8dc0d69c0f,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722466387006465100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff53188db11c806bf35dd07eff4b1128c44be285a01574f67f09ef715b4b10e,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722466382001895181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722466381007815423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0c62213b2094cb3bc6f5ca0ee611bfcf838f220d9eac73b1f247f9306b7b12,PodSandboxId:6c486d06f45f10efcffeaa496724cce5b36a7b76bac2d1364cd435d6d29ee346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722466379416346806,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759aef3785027e51a6ffaf4600501de1c4f172255ba0a56a4a066e48b76815cd,PodSandboxId:b5a9d2dab28855158bb8d7d8a912eb4a3b0c8b8c4b65dd70e27205a54b4eeef5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722466358688421220,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a78ef189dbee5a2486ddd9b05d358c71,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9adf2f762249c086770be0dccdad6342b31bb791b5049eab3c303f2a4f58b6d,PodSandboxId:eedad7f368c86a11a78608bf4c65b2e704562d184cdff0196197312c0be308a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722466345915403089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:99532a403baebdc6663749afaafe4f1278665f15fd9881cec7372cb3bd7a22cf,PodSandboxId:fa08797866ea77da5dcac572e4faa6cfb6a6c19ade5d505d0067648fa01291b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466346138487527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2138e0d9e3344e3b532f5acecf75d27a8cedbb6953fc63d24ede45cbd0006b9b,PodSandboxId:fa3085df7464ae26df2bfda86ac8e5b5ea3ee13774ba1a145488ac6e5b2abab5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722466345970782157,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d90c4a99cb0fa58abef1bd6bf5fc8a793d6dc437e6b34cbea0e8dad8fff1b1,PodSandboxId:01adb669956067e8567526b7ae42a30b92bb8ac20b697066b67ed9e531f12c8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466345888180447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:533eebe6788d6b43158307f1805677a1d4a93ba42b1a49930bc8e20cf70bb248,PodSandboxId:07d745111be0ce07d4ad6a3c6a25ff5e77b3848179d092aa5a8207e952180f2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722466345742159642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc5a6318a06b6592b305ff451abdd474f1f8c15db3ecb3842e8b6bb78ee8927,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722466345818498562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405
763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13dc8c74ad52362b9fef7a6a07c57b1ce5cee751af4222dbea80f7591f98aba,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722466345680568306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417
d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5db421039fa4ddf654b2c8427782fc8c571483506ab0654c5e1e1a9332dbe52,PodSandboxId:b7a5cd34a635aee7a672b63491218083fc9a5bae4ff51c73b6caaf1e6408636e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722466345641850218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722465854210269279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712295655329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712236095444,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722465700318134680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722465696992304673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722465676786785746,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722465676790601629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd7ac43b-636c-4db6-aa4c-6384264b9558 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.675676429Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=0caa7a4d-be89-45ac-9f31-55e0fb1f9ba7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.676082241Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6c486d06f45f10efcffeaa496724cce5b36a7b76bac2d1364cd435d6d29ee346,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-98526,Uid:f2b8a59d-2816-4c02-9563-0182ea51e862,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722466379289382864,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T22:44:10.378767779Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b5a9d2dab28855158bb8d7d8a912eb4a3b0c8b8c4b65dd70e27205a54b4eeef5,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-150891,Uid:a78ef189dbee5a2486ddd9b05d358c71,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1722466358593104777,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a78ef189dbee5a2486ddd9b05d358c71,},Annotations:map[string]string{kubernetes.io/config.hash: a78ef189dbee5a2486ddd9b05d358c71,kubernetes.io/config.seen: 2024-07-31T22:52:18.222904249Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fa08797866ea77da5dcac572e4faa6cfb6a6c19ade5d505d0067648fa01291b7,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4928n,Uid:258080d9-48d4-4214-a8c2-ccdd229a3a4f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722466345511437542,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-31T22:41:51.742425457Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:01adb669956067e8567526b7ae42a30b92bb8ac20b697066b67ed9e531f12c8a,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkd4j,Uid:b40942b0-bff9-4a49-88a3-d188d5b7dcbe,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722466345483576773,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T22:41:51.732938485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c482636f-76e6-4ea7-9a14-3e9d6a7a4308,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722466345435566656,Labels:map[string]string
{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/confi
g.seen: 2024-07-31T22:41:51.738557442Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fa3085df7464ae26df2bfda86ac8e5b5ea3ee13774ba1a145488ac6e5b2abab5,Metadata:&PodSandboxMetadata{Name:kindnet-4qn8c,Uid:4143fb96-5f2a-4107-807d-29ffbf5a95b8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722466345405103002,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T22:41:36.361888971Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:07d745111be0ce07d4ad6a3c6a25ff5e77b3848179d092aa5a8207e952180f2f,Metadata:&PodSandboxMetadata{Name:etcd-ha-150891,Uid:ca4fdb575adf0dfd05eecea66937158a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:172
2466345402871500,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.105:2379,kubernetes.io/config.hash: ca4fdb575adf0dfd05eecea66937158a,kubernetes.io/config.seen: 2024-07-31T22:41:22.944216239Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-150891,Uid:44643675366c40417d6e98034ea71e23,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722466345397686490,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 446436753
66c40417d6e98034ea71e23,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 44643675366c40417d6e98034ea71e23,kubernetes.io/config.seen: 2024-07-31T22:41:22.944218680Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eedad7f368c86a11a78608bf4c65b2e704562d184cdff0196197312c0be308a4,Metadata:&PodSandboxMetadata{Name:kube-proxy-9xcss,Uid:287c0a26-1f93-4579-a5db-29b604571422,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722466345381355951,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T22:41:36.363268048Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&Po
dSandboxMetadata{Name:kube-apiserver-ha-150891,Uid:4c24d15746700716a70405763791ce13,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722466345376997882,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.105:8443,kubernetes.io/config.hash: 4c24d15746700716a70405763791ce13,kubernetes.io/config.seen: 2024-07-31T22:41:22.944217701Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b7a5cd34a635aee7a672b63491218083fc9a5bae4ff51c73b6caaf1e6408636e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-150891,Uid:590a663f1c0e9cab100530ceef20cec7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722466345359571514,Labels:map[string]string{component: kube-scheduler,io.ku
bernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 590a663f1c0e9cab100530ceef20cec7,kubernetes.io/config.seen: 2024-07-31T22:41:22.944210647Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-98526,Uid:f2b8a59d-2816-4c02-9563-0182ea51e862,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722465852488582516,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T22:44:10.378767779Z,kubernetes.io/config.source
: api,},RuntimeHandler:,},&PodSandbox{Id:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-4928n,Uid:258080d9-48d4-4214-a8c2-ccdd229a3a4f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722465712051338084,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T22:41:51.742425457Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkd4j,Uid:b40942b0-bff9-4a49-88a3-d188d5b7dcbe,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722465712040651410,Labels:map[string]string{io.kubernetes.container.name: POD,io.ku
bernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T22:41:51.732938485Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&PodSandboxMetadata{Name:kube-proxy-9xcss,Uid:287c0a26-1f93-4579-a5db-29b604571422,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722465696683382435,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T22:41:36.363268048Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&
PodSandbox{Id:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&PodSandboxMetadata{Name:kindnet-4qn8c,Uid:4143fb96-5f2a-4107-807d-29ffbf5a95b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722465696676677118,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T22:41:36.361888971Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&PodSandboxMetadata{Name:etcd-ha-150891,Uid:ca4fdb575adf0dfd05eecea66937158a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722465676585869694,Labels:map[string]string{component: etcd,io.kubernetes.container.name:
POD,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.105:2379,kubernetes.io/config.hash: ca4fdb575adf0dfd05eecea66937158a,kubernetes.io/config.seen: 2024-07-31T22:41:16.131604117Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-150891,Uid:590a663f1c0e9cab100530ceef20cec7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722465676581426822,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 590a663f
1c0e9cab100530ceef20cec7,kubernetes.io/config.seen: 2024-07-31T22:41:16.131601793Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0caa7a4d-be89-45ac-9f31-55e0fb1f9ba7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.677509676Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02df7c18-6f0a-4112-aeaa-08228d259f55 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.677571888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02df7c18-6f0a-4112-aeaa-08228d259f55 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.679084234Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e523a71817c2281745cea76e1c3eb9d6a34ab71c970be9f279e434b16584212,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722466428005721861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936da742737fd9866ed6f1699fd59673b966983ce9ab155496170b8dc0d69c0f,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722466387006465100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff53188db11c806bf35dd07eff4b1128c44be285a01574f67f09ef715b4b10e,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722466382001895181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722466381007815423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0c62213b2094cb3bc6f5ca0ee611bfcf838f220d9eac73b1f247f9306b7b12,PodSandboxId:6c486d06f45f10efcffeaa496724cce5b36a7b76bac2d1364cd435d6d29ee346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722466379416346806,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759aef3785027e51a6ffaf4600501de1c4f172255ba0a56a4a066e48b76815cd,PodSandboxId:b5a9d2dab28855158bb8d7d8a912eb4a3b0c8b8c4b65dd70e27205a54b4eeef5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722466358688421220,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a78ef189dbee5a2486ddd9b05d358c71,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9adf2f762249c086770be0dccdad6342b31bb791b5049eab3c303f2a4f58b6d,PodSandboxId:eedad7f368c86a11a78608bf4c65b2e704562d184cdff0196197312c0be308a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722466345915403089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:99532a403baebdc6663749afaafe4f1278665f15fd9881cec7372cb3bd7a22cf,PodSandboxId:fa08797866ea77da5dcac572e4faa6cfb6a6c19ade5d505d0067648fa01291b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466346138487527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2138e0d9e3344e3b532f5acecf75d27a8cedbb6953fc63d24ede45cbd0006b9b,PodSandboxId:fa3085df7464ae26df2bfda86ac8e5b5ea3ee13774ba1a145488ac6e5b2abab5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722466345970782157,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d90c4a99cb0fa58abef1bd6bf5fc8a793d6dc437e6b34cbea0e8dad8fff1b1,PodSandboxId:01adb669956067e8567526b7ae42a30b92bb8ac20b697066b67ed9e531f12c8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466345888180447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:533eebe6788d6b43158307f1805677a1d4a93ba42b1a49930bc8e20cf70bb248,PodSandboxId:07d745111be0ce07d4ad6a3c6a25ff5e77b3848179d092aa5a8207e952180f2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722466345742159642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc5a6318a06b6592b305ff451abdd474f1f8c15db3ecb3842e8b6bb78ee8927,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722466345818498562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405
763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13dc8c74ad52362b9fef7a6a07c57b1ce5cee751af4222dbea80f7591f98aba,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722466345680568306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417
d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5db421039fa4ddf654b2c8427782fc8c571483506ab0654c5e1e1a9332dbe52,PodSandboxId:b7a5cd34a635aee7a672b63491218083fc9a5bae4ff51c73b6caaf1e6408636e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722466345641850218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722465854210269279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712295655329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712236095444,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722465700318134680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722465696992304673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722465676786785746,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722465676790601629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02df7c18-6f0a-4112-aeaa-08228d259f55 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.722101782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=434af13c-1480-4e66-818d-034adcc4774a name=/runtime.v1.RuntimeService/Version
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.722193901Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=434af13c-1480-4e66-818d-034adcc4774a name=/runtime.v1.RuntimeService/Version
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.723521372Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93e3b33f-819a-448b-b506-183eeee0b2a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.724055638Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722466656724032808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93e3b33f-819a-448b-b506-183eeee0b2a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.724464752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4dfce2a5-4141-4f33-88db-c7fc5e14a543 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.724520589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4dfce2a5-4141-4f33-88db-c7fc5e14a543 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 22:57:36 ha-150891 crio[3719]: time="2024-07-31 22:57:36.725008590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e523a71817c2281745cea76e1c3eb9d6a34ab71c970be9f279e434b16584212,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722466428005721861,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:936da742737fd9866ed6f1699fd59673b966983ce9ab155496170b8dc0d69c0f,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722466387006465100,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ff53188db11c806bf35dd07eff4b1128c44be285a01574f67f09ef715b4b10e,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722466382001895181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3,PodSandboxId:3978400018cf7c6d31aa4ae65fda7c2fe78c2ef70bac5fb75cd9d3b55df5f859,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722466381007815423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c482636f-76e6-4ea7-9a14-3e9d6a7a4308,},Annotations:map[string]string{io.kubernetes.container.hash: a04ed9f3,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f0c62213b2094cb3bc6f5ca0ee611bfcf838f220d9eac73b1f247f9306b7b12,PodSandboxId:6c486d06f45f10efcffeaa496724cce5b36a7b76bac2d1364cd435d6d29ee346,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722466379416346806,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annotations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:759aef3785027e51a6ffaf4600501de1c4f172255ba0a56a4a066e48b76815cd,PodSandboxId:b5a9d2dab28855158bb8d7d8a912eb4a3b0c8b8c4b65dd70e27205a54b4eeef5,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722466358688421220,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a78ef189dbee5a2486ddd9b05d358c71,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9adf2f762249c086770be0dccdad6342b31bb791b5049eab3c303f2a4f58b6d,PodSandboxId:eedad7f368c86a11a78608bf4c65b2e704562d184cdff0196197312c0be308a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722466345915403089,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:99532a403baebdc6663749afaafe4f1278665f15fd9881cec7372cb3bd7a22cf,PodSandboxId:fa08797866ea77da5dcac572e4faa6cfb6a6c19ade5d505d0067648fa01291b7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466346138487527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kubernetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kub
ernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2138e0d9e3344e3b532f5acecf75d27a8cedbb6953fc63d24ede45cbd0006b9b,PodSandboxId:fa3085df7464ae26df2bfda86ac8e5b5ea3ee13774ba1a145488ac6e5b2abab5,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722466345970782157,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12d90c4a99cb0fa58abef1bd6bf5fc8a793d6dc437e6b34cbea0e8dad8fff1b1,PodSandboxId:01adb669956067e8567526b7ae42a30b92bb8ac20b697066b67ed9e531f12c8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722466345888180447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,
\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:533eebe6788d6b43158307f1805677a1d4a93ba42b1a49930bc8e20cf70bb248,PodSandboxId:07d745111be0ce07d4ad6a3c6a25ff5e77b3848179d092aa5a8207e952180f2f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722466345742159642,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fc5a6318a06b6592b305ff451abdd474f1f8c15db3ecb3842e8b6bb78ee8927,PodSandboxId:0e42ebfba53c19e3ad70a37989cd714ea8e1693dff0aeeee6e06171279e746a9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722466345818498562,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c24d15746700716a70405
763791ce13,},Annotations:map[string]string{io.kubernetes.container.hash: 254bf9c3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a13dc8c74ad52362b9fef7a6a07c57b1ce5cee751af4222dbea80f7591f98aba,PodSandboxId:9df1481757b3767c3921f94ed43e185a6a1d4d4817f2aad2387ed21b16c135dc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722466345680568306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44643675366c40417
d6e98034ea71e23,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5db421039fa4ddf654b2c8427782fc8c571483506ab0654c5e1e1a9332dbe52,PodSandboxId:b7a5cd34a635aee7a672b63491218083fc9a5bae4ff51c73b6caaf1e6408636e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722466345641850218,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bbba80074e2cf93375d995e2ab4584cd05dbee5f0572b7c90a867e49233e43,PodSandboxId:23ff00497365e7d16a35a3b664d0058fed7c5d05f49ee82a164db4c121c3ba0e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722465854210269279,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-98526,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f2b8a59d-2816-4c02-9563-0182ea51e862,},Annot
ations:map[string]string{io.kubernetes.container.hash: 4ac42638,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811,PodSandboxId:60acb98d735098d63b0b04b6a3c1dc7c901a8da93eacf68bbf624bf38087979e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712295655329,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-4928n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 258080d9-48d4-4214-a8c2-ccdd229a3a4f,},Annotations:map[string]string{io.kube
rnetes.container.hash: 69bc9422,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2,PodSandboxId:911e886f5312d197044a540fa26623269ff26263eabd86b5c0e68ce12526873e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722465712236095444,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-rkd4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40942b0-bff9-4a49-88a3-d188d5b7dcbe,},Annotations:map[string]string{io.kubernetes.container.hash: 3afa8891,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f,PodSandboxId:de805f754594240ebdf7f110aca75705a77e49029ae56beddac2b9a0726068de,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722465700318134680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-4qn8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4143fb96-5f2a-4107-807d-29ffbf5a95b8,},Annotations:map[string]string{io.kubernetes.container.hash: a9dd2e6b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526,PodSandboxId:af4274f85760c1b9bd6fc68d248294bde2c87243da61e61df82cbce268274691,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722465696992304673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9xcss,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 287c0a26-1f93-4579-a5db-29b604571422,},Annotations:map[string]string{io.kubernetes.container.hash: 8243fe2e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78,PodSandboxId:015145f976eb622d4d1475dd7cb68d19be21b6347b08ec369f089b2040795931,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b7
6722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722465676786785746,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 590a663f1c0e9cab100530ceef20cec7,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8,PodSandboxId:148244b8abddec004480e4cbb07fc4217f052427fc9f00451714e3cf964a9589,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c
0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722465676790601629,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-150891,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca4fdb575adf0dfd05eecea66937158a,},Annotations:map[string]string{io.kubernetes.container.hash: 9f19fe60,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4dfce2a5-4141-4f33-88db-c7fc5e14a543 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e523a71817c2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   3978400018cf7       storage-provisioner
	936da742737fd       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   0e42ebfba53c1       kube-apiserver-ha-150891
	3ff53188db11c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   9df1481757b37       kube-controller-manager-ha-150891
	8ba788340850a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   3978400018cf7       storage-provisioner
	2f0c62213b209       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   6c486d06f45f1       busybox-fc5497c4f-98526
	759aef3785027       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   b5a9d2dab2885       kube-vip-ha-150891
	99532a403baeb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   fa08797866ea7       coredns-7db6d8ff4d-4928n
	2138e0d9e3344       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      5 minutes ago       Running             kindnet-cni               1                   fa3085df7464a       kindnet-4qn8c
	b9adf2f762249       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   eedad7f368c86       kube-proxy-9xcss
	12d90c4a99cb0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   01adb66995606       coredns-7db6d8ff4d-rkd4j
	3fc5a6318a06b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   0e42ebfba53c1       kube-apiserver-ha-150891
	533eebe6788d6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   07d745111be0c       etcd-ha-150891
	a13dc8c74ad52       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   9df1481757b37       kube-controller-manager-ha-150891
	e5db421039fa4       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   b7a5cd34a635a       kube-scheduler-ha-150891
	17bbba80074e2       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   23ff00497365e       busybox-fc5497c4f-98526
	6c2d6faeccb11       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   60acb98d73509       coredns-7db6d8ff4d-4928n
	569d471778fea       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   911e886f5312d       coredns-7db6d8ff4d-rkd4j
	6800ea54157a1       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    15 minutes ago      Exited              kindnet-cni               0                   de805f7545942       kindnet-4qn8c
	45f49431a7774       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      15 minutes ago      Exited              kube-proxy                0                   af4274f85760c       kube-proxy-9xcss
	31a5692b683c3       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   148244b8abdde       etcd-ha-150891
	c5a522e53c2bc       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   015145f976eb6       kube-scheduler-ha-150891
	
	
	==> coredns [12d90c4a99cb0fa58abef1bd6bf5fc8a793d6dc437e6b34cbea0e8dad8fff1b1] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[246625185]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 22:52:35.090) (total time: 10000ms):
	Trace[246625185]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (22:52:45.091)
	Trace[246625185]: [10.000863103s] [10.000863103s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43382->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[197213444]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 22:52:40.937) (total time: 10064ms):
	Trace[197213444]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43382->10.96.0.1:443: read: connection reset by peer 10064ms (22:52:51.002)
	Trace[197213444]: [10.064702475s] [10.064702475s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43382->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [569d471778fea0ef166f94a409d0785b8c9e39e587fad95bbf1a163b2d2681b2] <==
	[INFO] 10.244.1.2:33021 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180449s
	[INFO] 10.244.1.2:54691 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000080124s
	[INFO] 10.244.1.2:59380 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104324s
	[INFO] 10.244.2.2:46771 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000088924s
	[INFO] 10.244.2.2:51063 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001242769s
	[INFO] 10.244.2.2:49935 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074586s
	[INFO] 10.244.0.4:56290 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010407s
	[INFO] 10.244.0.4:57803 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109451s
	[INFO] 10.244.1.2:53651 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000133423s
	[INFO] 10.244.1.2:54989 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149762s
	[INFO] 10.244.1.2:55181 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079999s
	[INFO] 10.244.1.2:45949 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096277s
	[INFO] 10.244.2.2:38998 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160565s
	[INFO] 10.244.2.2:55687 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080958s
	[INFO] 10.244.0.4:36222 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000152278s
	[INFO] 10.244.0.4:55182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000115569s
	[INFO] 10.244.0.4:40749 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099022s
	[INFO] 10.244.1.2:42636 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134944s
	[INFO] 10.244.1.2:45102 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000091957s
	[INFO] 10.244.1.2:39878 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000081213s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1886&timeout=9m5s&timeoutSeconds=545&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1886&timeout=5m21s&timeoutSeconds=321&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1883&timeout=8m44s&timeoutSeconds=524&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [6c2d6faeccb1147d3aade56236c9691ff7ff68857a696109e7c5de25f8bae811] <==
	[INFO] 10.244.0.4:44718 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011559s
	[INFO] 10.244.1.2:39166 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153589s
	[INFO] 10.244.1.2:53738 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00171146s
	[INFO] 10.244.1.2:53169 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192547s
	[INFO] 10.244.1.2:46534 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001207677s
	[INFO] 10.244.1.2:40987 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092132s
	[INFO] 10.244.2.2:51004 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179521s
	[INFO] 10.244.2.2:44618 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001670196s
	[INFO] 10.244.2.2:34831 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000094811s
	[INFO] 10.244.2.2:49392 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000285273s
	[INFO] 10.244.2.2:44694 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111378s
	[INFO] 10.244.0.4:58491 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160933s
	[INFO] 10.244.0.4:44490 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000217734s
	[INFO] 10.244.2.2:53960 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106212s
	[INFO] 10.244.2.2:47661 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000161869s
	[INFO] 10.244.0.4:43273 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101944s
	[INFO] 10.244.1.2:54182 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187102s
	[INFO] 10.244.2.2:60067 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151741s
	[INFO] 10.244.2.2:49034 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160035s
	[INFO] 10.244.2.2:49392 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096218s
	[INFO] 10.244.2.2:59220 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000129048s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1869&timeout=5m52s&timeoutSeconds=352&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1886&timeout=6m0s&timeoutSeconds=360&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [99532a403baebdc6663749afaafe4f1278665f15fd9881cec7372cb3bd7a22cf] <==
	[INFO] plugin/kubernetes: Trace[1710803375]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 22:52:30.801) (total time: 10001ms):
	Trace[1710803375]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (22:52:40.802)
	Trace[1710803375]: [10.001213607s] [10.001213607s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47602->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47602->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47614->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:47614->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-150891
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T22_41_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:41:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:57:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:53:07 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:53:07 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:53:07 +0000   Wed, 31 Jul 2024 22:41:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:53:07 +0000   Wed, 31 Jul 2024 22:41:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-150891
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a8ca2005fa042d7a84b5199ab2c7a15
	  System UUID:                6a8ca200-5fa0-42d7-a84b-5199ab2c7a15
	  Boot ID:                    2ffe06f6-f7c0-4945-b70b-2276f3221b95
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-98526              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-4928n             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-rkd4j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-150891                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-4qn8c                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-150891             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-150891    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-9xcss                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-150891             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-150891                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m27s  kube-proxy       
	  Normal   Starting                 15m    kube-proxy       
	  Normal   Starting                 16m    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    16m    kubelet          Node ha-150891 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m    kubelet          Node ha-150891 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m    kubelet          Node ha-150891 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m    node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal   NodeReady                15m    kubelet          Node ha-150891 status is now: NodeReady
	  Normal   RegisteredNode           14m    node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal   RegisteredNode           13m    node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Warning  ContainerGCFailed        6m14s  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m27s  node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal   RegisteredNode           4m15s  node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	  Normal   RegisteredNode           3m12s  node-controller  Node ha-150891 event: Registered Node ha-150891 in Controller
	
	
	Name:               ha-150891-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_42_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:42:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:57:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 22:53:49 +0000   Wed, 31 Jul 2024 22:53:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 22:53:49 +0000   Wed, 31 Jul 2024 22:53:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 22:53:49 +0000   Wed, 31 Jul 2024 22:53:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 22:53:49 +0000   Wed, 31 Jul 2024 22:53:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-150891-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1608b7369bb468b8c8c5013f81b09bb
	  System UUID:                c1608b73-69bb-468b-8c8c-5013f81b09bb
	  Boot ID:                    1b6fd4e8-5623-4950-8060-fcbc7d176ce8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cwsjc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-150891-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-bz2j7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-150891-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-150891-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-nmkp9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-150891-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-150891-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m14s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-150891-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-150891-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-150891-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-150891-m02 status is now: NodeNotReady
	  Normal  Starting                 4m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m57s (x8 over 4m57s)  kubelet          Node ha-150891-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s (x8 over 4m57s)  kubelet          Node ha-150891-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s (x7 over 4m57s)  kubelet          Node ha-150891-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m27s                  node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-150891-m02 event: Registered Node ha-150891-m02 in Controller
	
	
	Name:               ha-150891-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-150891-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=ha-150891
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T22_44_46_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 22:44:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-150891-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 22:55:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 22:54:49 +0000   Wed, 31 Jul 2024 22:55:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 22:54:49 +0000   Wed, 31 Jul 2024 22:55:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 22:54:49 +0000   Wed, 31 Jul 2024 22:55:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 22:54:49 +0000   Wed, 31 Jul 2024 22:55:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.120
	  Hostname:    ha-150891-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bdcf2d763364b5cbf54f471f1e49c03
	  System UUID:                7bdcf2d7-6336-4b5c-bf54-f471f1e49c03
	  Boot ID:                    97571a69-365f-4aec-b624-9c75ef9066b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-b9dx9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-4ghcd              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-l8srs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-150891-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-150891-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-150891-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-150891-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m27s                  node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   RegisteredNode           3m12s                  node-controller  Node ha-150891-m04 event: Registered Node ha-150891-m04 in Controller
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-150891-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-150891-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-150891-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-150891-m04 has been rebooted, boot id: 97571a69-365f-4aec-b624-9c75ef9066b7
	  Normal   NodeReady                2m48s                  kubelet          Node ha-150891-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 3m47s)   node-controller  Node ha-150891-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul31 22:41] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.059402] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055698] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.187489] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.128918] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.269933] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +4.169130] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +3.879571] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +0.061597] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.693408] systemd-fstab-generator[1352]: Ignoring "noauto" option for root device
	[  +0.081387] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.056574] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.292402] kauditd_printk_skb: 38 callbacks suppressed
	[Jul31 22:42] kauditd_printk_skb: 26 callbacks suppressed
	[Jul31 22:52] systemd-fstab-generator[3638]: Ignoring "noauto" option for root device
	[  +0.146321] systemd-fstab-generator[3651]: Ignoring "noauto" option for root device
	[  +0.193378] systemd-fstab-generator[3665]: Ignoring "noauto" option for root device
	[  +0.165845] systemd-fstab-generator[3677]: Ignoring "noauto" option for root device
	[  +0.297616] systemd-fstab-generator[3705]: Ignoring "noauto" option for root device
	[  +0.804829] systemd-fstab-generator[3804]: Ignoring "noauto" option for root device
	[  +4.747343] kauditd_printk_skb: 122 callbacks suppressed
	[  +7.365479] kauditd_printk_skb: 85 callbacks suppressed
	[Jul31 22:53] kauditd_printk_skb: 11 callbacks suppressed
	[ +12.058061] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [31a5692b683c35e3266d20069f38f8ddb750207f7ee4bac31a9453f7c8fd32a8] <==
	2024/07/31 22:50:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 22:50:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 22:50:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 22:50:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/07/31 22:50:45 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-31T22:50:45.388499Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.105:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T22:50:45.388629Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.105:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T22:50:45.388793Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"38dbae10e7efb596","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-31T22:50:45.388967Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389003Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.38904Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389086Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389153Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389209Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38dbae10e7efb596","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389239Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"90e478e20277b34c"}
	{"level":"info","ts":"2024-07-31T22:50:45.389263Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389291Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389324Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389397Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389445Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389497Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.389525Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:50:45.392892Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.105:2380"}
	{"level":"info","ts":"2024-07-31T22:50:45.393077Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.105:2380"}
	{"level":"info","ts":"2024-07-31T22:50:45.393111Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-150891","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.105:2380"],"advertise-client-urls":["https://192.168.39.105:2379"]}
	
	
	==> etcd [533eebe6788d6b43158307f1805677a1d4a93ba42b1a49930bc8e20cf70bb248] <==
	{"level":"info","ts":"2024-07-31T22:54:06.602037Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:54:06.602096Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:54:06.60236Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:54:06.628514Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38dbae10e7efb596","to":"2decda6e654e6303","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-31T22:54:06.628567Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:54:06.633775Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"38dbae10e7efb596","to":"2decda6e654e6303","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-31T22:54:06.633806Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:55:02.814467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"38dbae10e7efb596 switched to configuration voters=(4097059673657554326 10440602748250993484)"}
	{"level":"info","ts":"2024-07-31T22:55:02.816896Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"f45b5855e490ef48","local-member-id":"38dbae10e7efb596","removed-remote-peer-id":"2decda6e654e6303","removed-remote-peer-urls":["https://192.168.39.241:2380"]}
	{"level":"info","ts":"2024-07-31T22:55:02.817003Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2decda6e654e6303"}
	{"level":"warn","ts":"2024-07-31T22:55:02.81728Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:55:02.817452Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"2decda6e654e6303"}
	{"level":"warn","ts":"2024-07-31T22:55:02.818044Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:55:02.818121Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:55:02.818208Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"warn","ts":"2024-07-31T22:55:02.818527Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303","error":"context canceled"}
	{"level":"warn","ts":"2024-07-31T22:55:02.818631Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"2decda6e654e6303","error":"failed to read 2decda6e654e6303 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-31T22:55:02.818683Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"warn","ts":"2024-07-31T22:55:02.819126Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303","error":"context canceled"}
	{"level":"info","ts":"2024-07-31T22:55:02.819196Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"38dbae10e7efb596","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:55:02.819274Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:55:02.819315Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"38dbae10e7efb596","removed-remote-peer-id":"2decda6e654e6303"}
	{"level":"info","ts":"2024-07-31T22:55:02.819367Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"38dbae10e7efb596","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"2decda6e654e6303"}
	{"level":"warn","ts":"2024-07-31T22:55:02.834226Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"38dbae10e7efb596","remote-peer-id-stream-handler":"38dbae10e7efb596","remote-peer-id-from":"2decda6e654e6303"}
	{"level":"warn","ts":"2024-07-31T22:55:02.834531Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"38dbae10e7efb596","remote-peer-id-stream-handler":"38dbae10e7efb596","remote-peer-id-from":"2decda6e654e6303"}
	
	
	==> kernel <==
	 22:57:37 up 16 min,  0 users,  load average: 0.25, 0.29, 0.21
	Linux ha-150891 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2138e0d9e3344e3b532f5acecf75d27a8cedbb6953fc63d24ede45cbd0006b9b] <==
	I0731 22:56:56.974330       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:57:06.974790       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:57:06.974942       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:57:06.975118       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:57:06.975333       1 main.go:299] handling current node
	I0731 22:57:06.975376       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:57:06.975448       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:57:16.982808       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:57:16.982841       1 main.go:299] handling current node
	I0731 22:57:16.982856       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:57:16.982860       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:57:16.982971       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:57:16.982976       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:57:26.973787       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:57:26.973837       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:57:26.973991       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:57:26.974023       1 main.go:299] handling current node
	I0731 22:57:26.974040       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:57:26.974048       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:57:36.982921       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:57:36.982956       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:57:36.983147       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:57:36.983160       1 main.go:299] handling current node
	I0731 22:57:36.983173       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:57:36.983179       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [6800ea54157a15a000e079b0fb7e9f943cd931198ec589b775ebb85fbeca599f] <==
	I0731 22:50:11.241105       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:50:21.242171       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:50:21.242216       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:50:21.242357       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:50:21.242378       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:50:21.242428       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:50:21.242445       1 main.go:299] handling current node
	I0731 22:50:21.242456       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:50:21.242461       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:50:31.239738       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:50:31.239781       1 main.go:299] handling current node
	I0731 22:50:31.239800       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:50:31.239808       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:50:31.239963       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:50:31.239995       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:50:31.240078       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:50:31.240086       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	I0731 22:50:41.240255       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0731 22:50:41.240301       1 main.go:299] handling current node
	I0731 22:50:41.240319       1 main.go:295] Handling node with IPs: map[192.168.39.224:{}]
	I0731 22:50:41.240324       1 main.go:322] Node ha-150891-m02 has CIDR [10.244.1.0/24] 
	I0731 22:50:41.240458       1 main.go:295] Handling node with IPs: map[192.168.39.241:{}]
	I0731 22:50:41.240482       1 main.go:322] Node ha-150891-m03 has CIDR [10.244.2.0/24] 
	I0731 22:50:41.240567       1 main.go:295] Handling node with IPs: map[192.168.39.120:{}]
	I0731 22:50:41.240586       1 main.go:322] Node ha-150891-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [3fc5a6318a06b6592b305ff451abdd474f1f8c15db3ecb3842e8b6bb78ee8927] <==
	I0731 22:52:26.562344       1 options.go:221] external host was not specified, using 192.168.39.105
	I0731 22:52:26.567357       1 server.go:148] Version: v1.30.3
	I0731 22:52:26.567441       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:52:26.975809       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0731 22:52:26.981784       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 22:52:26.988214       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0731 22:52:26.988249       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0731 22:52:26.988431       1 instance.go:299] Using reconciler: lease
	W0731 22:52:46.967384       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0731 22:52:46.970894       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0731 22:52:46.991391       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [936da742737fd9866ed6f1699fd59673b966983ce9ab155496170b8dc0d69c0f] <==
	I0731 22:53:09.090052       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0731 22:53:09.090245       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0731 22:53:09.140098       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 22:53:09.140133       1 policy_source.go:224] refreshing policies
	I0731 22:53:09.164241       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 22:53:09.164279       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 22:53:09.165643       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 22:53:09.165827       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 22:53:09.170082       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 22:53:09.171589       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 22:53:09.186764       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 22:53:09.194201       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 22:53:09.195233       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 22:53:09.195334       1 aggregator.go:165] initial CRD sync complete...
	I0731 22:53:09.195355       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 22:53:09.195361       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 22:53:09.195367       1 cache.go:39] Caches are synced for autoregister controller
	W0731 22:53:09.224167       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.224 192.168.39.241]
	I0731 22:53:09.225882       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 22:53:09.228985       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 22:53:09.242359       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0731 22:53:09.247776       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0731 22:53:10.077905       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0731 22:53:10.565652       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.105 192.168.39.224 192.168.39.241]
	W0731 22:55:10.572864       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.105 192.168.39.224]
	
	
	==> kube-controller-manager [3ff53188db11c806bf35dd07eff4b1128c44be285a01574f67f09ef715b4b10e] <==
	E0731 22:55:42.048561       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150891-m03\" not found" logger="pod-garbage-collector-controller" node="ha-150891-m03"
	E0731 22:55:42.048571       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150891-m03\" not found" logger="pod-garbage-collector-controller" node="ha-150891-m03"
	E0731 22:55:42.048578       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150891-m03\" not found" logger="pod-garbage-collector-controller" node="ha-150891-m03"
	E0731 22:55:42.048584       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150891-m03\" not found" logger="pod-garbage-collector-controller" node="ha-150891-m03"
	I0731 22:55:51.372604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.886612ms"
	I0731 22:55:51.373361       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.608µs"
	E0731 22:56:02.049173       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150891-m03\" not found" logger="pod-garbage-collector-controller" node="ha-150891-m03"
	E0731 22:56:02.049292       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150891-m03\" not found" logger="pod-garbage-collector-controller" node="ha-150891-m03"
	E0731 22:56:02.049319       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150891-m03\" not found" logger="pod-garbage-collector-controller" node="ha-150891-m03"
	E0731 22:56:02.049347       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150891-m03\" not found" logger="pod-garbage-collector-controller" node="ha-150891-m03"
	E0731 22:56:02.049370       1 gc_controller.go:153] "Failed to get node" err="node \"ha-150891-m03\" not found" logger="pod-garbage-collector-controller" node="ha-150891-m03"
	I0731 22:56:02.061894       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-150891-m03"
	I0731 22:56:02.097398       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-150891-m03"
	I0731 22:56:02.097544       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8bkwq"
	I0731 22:56:02.133076       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-8bkwq"
	I0731 22:56:02.133114       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-150891-m03"
	I0731 22:56:02.157795       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-150891-m03"
	I0731 22:56:02.158027       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-150891-m03"
	I0731 22:56:02.192570       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-150891-m03"
	I0731 22:56:02.192595       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-150891-m03"
	I0731 22:56:02.227779       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-150891-m03"
	I0731 22:56:02.227813       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-150891-m03"
	I0731 22:56:02.258847       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-150891-m03"
	I0731 22:56:02.258970       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-df4cg"
	I0731 22:56:02.289049       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-df4cg"
	
	
	==> kube-controller-manager [a13dc8c74ad52362b9fef7a6a07c57b1ce5cee751af4222dbea80f7591f98aba] <==
	I0731 22:52:27.543986       1 serving.go:380] Generated self-signed cert in-memory
	I0731 22:52:27.767213       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0731 22:52:27.767253       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:52:27.768640       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 22:52:27.768837       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0731 22:52:27.768916       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 22:52:27.769101       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0731 22:52:48.000056       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.105:8443/healthz\": dial tcp 192.168.39.105:8443: connect: connection refused"
	
	
	==> kube-proxy [45f49431a7774dcc848076ea078606520e3dab56ea2d6c64b95924b5bb827526] <==
	E0731 22:49:30.810541       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:30.810151       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:30.811167       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:37.338203       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:37.338271       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:37.338343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:37.338393       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:37.338215       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:37.338502       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:49.626684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:49.626876       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:49.628307       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:49.628401       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:49:49.628350       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:49:49.628532       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:50:11.130207       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:50:11.130275       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:50:14.202489       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:50:14.202556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:50:14.202621       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:50:14.202680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1866": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:50:38.779684       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:50:38.779861       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1843": dial tcp 192.168.39.254:8443: connect: no route to host
	W0731 22:50:41.851299       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	E0731 22:50:41.851413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-150891&resourceVersion=1823": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [b9adf2f762249c086770be0dccdad6342b31bb791b5049eab3c303f2a4f58b6d] <==
	I0731 22:52:27.489068       1 server_linux.go:69] "Using iptables proxy"
	E0731 22:52:29.372137       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150891\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 22:52:32.442751       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150891\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 22:52:35.514237       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150891\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 22:52:41.659197       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150891\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0731 22:52:50.875078       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-150891\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0731 22:53:09.151051       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.105"]
	I0731 22:53:09.281040       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 22:53:09.281127       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 22:53:09.281146       1 server_linux.go:165] "Using iptables Proxier"
	I0731 22:53:09.284485       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 22:53:09.284648       1 server.go:872] "Version info" version="v1.30.3"
	I0731 22:53:09.284675       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 22:53:09.285893       1 config.go:192] "Starting service config controller"
	I0731 22:53:09.285924       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 22:53:09.285948       1 config.go:101] "Starting endpoint slice config controller"
	I0731 22:53:09.285951       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 22:53:09.286589       1 config.go:319] "Starting node config controller"
	I0731 22:53:09.286615       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 22:53:09.386909       1 shared_informer.go:320] Caches are synced for service config
	I0731 22:53:09.386911       1 shared_informer.go:320] Caches are synced for node config
	I0731 22:53:09.387164       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [c5a522e53c2bcfb964f22459030a26eb652258dd779ed72bc4478c165621ca78] <==
	W0731 22:50:40.804554       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:40.804666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 22:50:40.928617       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 22:50:40.928784       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 22:50:41.043213       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:41.043329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 22:50:41.247266       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 22:50:41.247367       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 22:50:43.104616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 22:50:43.104666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 22:50:43.472849       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 22:50:43.472893       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 22:50:43.931196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 22:50:43.931370       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 22:50:44.105797       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 22:50:44.105840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 22:50:44.170514       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:44.170567       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 22:50:44.340896       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 22:50:44.340946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 22:50:44.409462       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:44.409514       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 22:50:45.101037       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:45.101079       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:50:45.312390       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e5db421039fa4ddf654b2c8427782fc8c571483506ab0654c5e1e1a9332dbe52] <==
	W0731 22:53:05.598324       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.105:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:05.598399       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.105:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:05.775244       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.105:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:05.775453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.105:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:05.887533       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.105:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:05.887678       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.105:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:06.047727       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.105:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:06.047873       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.105:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:06.206860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.105:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:06.206981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.105:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:06.252652       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.105:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:06.252872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.105:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:06.298794       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.105:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:06.298926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.105:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:07.050678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.105:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	E0731 22:53:07.050760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.105:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.105:8443: connect: connection refused
	W0731 22:53:09.096287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 22:53:09.096336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 22:53:09.129294       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 22:53:09.129340       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0731 22:53:23.505595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 22:54:59.514142       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-b9dx9\": pod busybox-fc5497c4f-b9dx9 is already assigned to node \"ha-150891-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-b9dx9" node="ha-150891-m04"
	E0731 22:54:59.514426       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod beab61e3-6b32-45ee-8139-4429cc7b3010(default/busybox-fc5497c4f-b9dx9) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-b9dx9"
	E0731 22:54:59.514560       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-b9dx9\": pod busybox-fc5497c4f-b9dx9 is already assigned to node \"ha-150891-m04\"" pod="default/busybox-fc5497c4f-b9dx9"
	I0731 22:54:59.514787       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-b9dx9" node="ha-150891-m04"
	
	
	==> kubelet <==
	Jul 31 22:53:33 ha-150891 kubelet[1359]: E0731 22:53:33.992407    1359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c482636f-76e6-4ea7-9a14-3e9d6a7a4308)\"" pod="kube-system/storage-provisioner" podUID="c482636f-76e6-4ea7-9a14-3e9d6a7a4308"
	Jul 31 22:53:47 ha-150891 kubelet[1359]: I0731 22:53:47.992758    1359 scope.go:117] "RemoveContainer" containerID="8ba788340850a696bdf93a1be5105645fba748b69b3d5e7d5b725a86782589d3"
	Jul 31 22:53:48 ha-150891 kubelet[1359]: I0731 22:53:48.992777    1359 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-150891" podUID="1b703a99-faf3-4c2d-a871-0fb6bce0b917"
	Jul 31 22:53:49 ha-150891 kubelet[1359]: I0731 22:53:49.022145    1359 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-150891"
	Jul 31 22:53:53 ha-150891 kubelet[1359]: I0731 22:53:53.012240    1359 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-150891" podStartSLOduration=4.012222715 podStartE2EDuration="4.012222715s" podCreationTimestamp="2024-07-31 22:53:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-31 22:53:53.011393497 +0000 UTC m=+750.167732552" watchObservedRunningTime="2024-07-31 22:53:53.012222715 +0000 UTC m=+750.168561769"
	Jul 31 22:54:23 ha-150891 kubelet[1359]: E0731 22:54:23.026212    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:54:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:54:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:54:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:54:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:55:23 ha-150891 kubelet[1359]: E0731 22:55:23.027233    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:55:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:55:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:55:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:55:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:56:23 ha-150891 kubelet[1359]: E0731 22:56:23.027285    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:56:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:56:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:56:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:56:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 22:57:23 ha-150891 kubelet[1359]: E0731 22:57:23.027634    1359 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 22:57:23 ha-150891 kubelet[1359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 22:57:23 ha-150891 kubelet[1359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 22:57:23 ha-150891 kubelet[1359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 22:57:23 ha-150891 kubelet[1359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 22:57:36.288211 1202912 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-1172186/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
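Note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner refuses any single line longer than its buffer cap (64 KiB by default), which is what a very long entry in lastStart.txt trips over. Below is a minimal sketch, assuming the file is read line by line with bufio.Scanner; the path and buffer sizes are illustrative only, not minikube's actual reader.

// Sketch: reading a log file whose lines may exceed bufio's default 64 KiB
// token limit. Raising the scanner's buffer cap avoids "token too long".
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("/tmp/lastStart.txt") // hypothetical path for illustration
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Default max token size is 64 KiB; allow lines up to 1 MiB instead.
	// Buffer must be set before the first call to Scan.
	sc.Buffer(make([]byte, 64*1024), 1024*1024)
	for sc.Scan() {
		_ = sc.Text() // process the line
	}
	if err := sc.Err(); err != nil {
		// With the default buffer, an over-long line surfaces here as
		// "bufio.Scanner: token too long".
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}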
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-150891 -n ha-150891
helpers_test.go:261: (dbg) Run:  kubectl --context ha-150891 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.87s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (326.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-615814
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-615814
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-615814: exit status 82 (2m1.873851618s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-615814-m03"  ...
	* Stopping node "multinode-615814-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-615814" : exit status 82
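For context on exit status 82: the stop command ran for roughly two minutes and gave up while the VM still reported "Running" (the GUEST_STOP_TIMEOUT box above). A minimal sketch of a stop-with-deadline poll loop that fails the same way follows; this is an assumption for illustration, not minikube's actual stop path, and vmState plus the timeout values are hypothetical.

// Sketch: poll a VM's state until it stops or a deadline passes. A VM that
// never leaves "Running" produces a stop-timeout error like the one above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState is a stand-in for the driver's state query (hypothetical).
func vmState() string { return "Running" }

func stopWithDeadline(d time.Duration) error {
	deadline := time.Now().Add(d)
	for time.Now().Before(deadline) {
		if vmState() == "Stopped" {
			return nil
		}
		time.Sleep(1 * time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The failed run above waited ~2 minutes; a short deadline keeps the demo quick.
	if err := stopWithDeadline(5 * time.Second); err != nil {
		fmt.Println("GUEST_STOP_TIMEOUT:", err)
	}
}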
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-615814 --wait=true -v=8 --alsologtostderr
E0731 23:14:53.722202 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 23:17:56.768309 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-615814 --wait=true -v=8 --alsologtostderr: (3m22.718953842s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-615814
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-615814 -n multinode-615814
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-615814 logs -n 25: (1.515758743s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m02:/home/docker/cp-test.txt                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4241457848/001/cp-test_multinode-615814-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m02:/home/docker/cp-test.txt                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814:/home/docker/cp-test_multinode-615814-m02_multinode-615814.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n multinode-615814 sudo cat                                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | /home/docker/cp-test_multinode-615814-m02_multinode-615814.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m02:/home/docker/cp-test.txt                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m03:/home/docker/cp-test_multinode-615814-m02_multinode-615814-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n multinode-615814-m03 sudo cat                                   | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | /home/docker/cp-test_multinode-615814-m02_multinode-615814-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp testdata/cp-test.txt                                                | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m03:/home/docker/cp-test.txt                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4241457848/001/cp-test_multinode-615814-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m03:/home/docker/cp-test.txt                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814:/home/docker/cp-test_multinode-615814-m03_multinode-615814.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n multinode-615814 sudo cat                                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | /home/docker/cp-test_multinode-615814-m03_multinode-615814.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m03:/home/docker/cp-test.txt                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814-m02:/home/docker/cp-test_multinode-615814-m03_multinode-615814-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n multinode-615814-m02 sudo cat                                   | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | /home/docker/cp-test_multinode-615814-m03_multinode-615814-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-615814 node stop m03                                                          | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	| node    | multinode-615814 node start                                                             | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-615814                                                                | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC |                     |
	| stop    | -p multinode-615814                                                                     | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC |                     |
	| start   | -p multinode-615814                                                                     | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:14 UTC | 31 Jul 24 23:18 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-615814                                                                | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:18 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 23:14:45
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 23:14:45.575980 1212267 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:14:45.576305 1212267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:14:45.576314 1212267 out.go:304] Setting ErrFile to fd 2...
	I0731 23:14:45.576319 1212267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:14:45.576509 1212267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 23:14:45.577096 1212267 out.go:298] Setting JSON to false
	I0731 23:14:45.578141 1212267 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":25037,"bootTime":1722442649,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 23:14:45.578213 1212267 start.go:139] virtualization: kvm guest
	I0731 23:14:45.580269 1212267 out.go:177] * [multinode-615814] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 23:14:45.581764 1212267 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 23:14:45.581773 1212267 notify.go:220] Checking for updates...
	I0731 23:14:45.583543 1212267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:14:45.584911 1212267 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:14:45.586277 1212267 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:14:45.587961 1212267 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 23:14:45.589489 1212267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:14:45.591415 1212267 config.go:182] Loaded profile config "multinode-615814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:14:45.591551 1212267 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:14:45.592213 1212267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:14:45.592322 1212267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:14:45.608963 1212267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I0731 23:14:45.609459 1212267 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:14:45.610069 1212267 main.go:141] libmachine: Using API Version  1
	I0731 23:14:45.610095 1212267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:14:45.610483 1212267 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:14:45.610717 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:14:45.650425 1212267 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 23:14:45.651848 1212267 start.go:297] selected driver: kvm2
	I0731 23:14:45.651872 1212267 start.go:901] validating driver "kvm2" against &{Name:multinode-615814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-615814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:14:45.652052 1212267 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:14:45.652631 1212267 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:14:45.652743 1212267 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 23:14:45.670210 1212267 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 23:14:45.671371 1212267 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:14:45.671436 1212267 cni.go:84] Creating CNI manager for ""
	I0731 23:14:45.671448 1212267 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 23:14:45.671545 1212267 start.go:340] cluster config:
	{Name:multinode-615814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-615814 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:14:45.671744 1212267 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:14:45.673569 1212267 out.go:177] * Starting "multinode-615814" primary control-plane node in "multinode-615814" cluster
	I0731 23:14:45.674808 1212267 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 23:14:45.674858 1212267 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 23:14:45.674867 1212267 cache.go:56] Caching tarball of preloaded images
	I0731 23:14:45.675000 1212267 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 23:14:45.675013 1212267 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 23:14:45.675144 1212267 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/config.json ...
	I0731 23:14:45.675364 1212267 start.go:360] acquireMachinesLock for multinode-615814: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:14:45.675416 1212267 start.go:364] duration metric: took 29.372µs to acquireMachinesLock for "multinode-615814"
	I0731 23:14:45.675437 1212267 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:14:45.675446 1212267 fix.go:54] fixHost starting: 
	I0731 23:14:45.675715 1212267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:14:45.675762 1212267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:14:45.691755 1212267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0731 23:14:45.692266 1212267 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:14:45.692826 1212267 main.go:141] libmachine: Using API Version  1
	I0731 23:14:45.692850 1212267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:14:45.693203 1212267 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:14:45.693433 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:14:45.693605 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetState
	I0731 23:14:45.695315 1212267 fix.go:112] recreateIfNeeded on multinode-615814: state=Running err=<nil>
	W0731 23:14:45.695354 1212267 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:14:45.697375 1212267 out.go:177] * Updating the running kvm2 "multinode-615814" VM ...
	I0731 23:14:45.698819 1212267 machine.go:94] provisionDockerMachine start ...
	I0731 23:14:45.698857 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:14:45.699223 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:45.702021 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.702548 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:45.702581 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.702823 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:14:45.703026 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.703211 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.703332 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:14:45.703511 1212267 main.go:141] libmachine: Using SSH client type: native
	I0731 23:14:45.703711 1212267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0731 23:14:45.703722 1212267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:14:45.816732 1212267 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-615814
	
	I0731 23:14:45.816771 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetMachineName
	I0731 23:14:45.817066 1212267 buildroot.go:166] provisioning hostname "multinode-615814"
	I0731 23:14:45.817094 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetMachineName
	I0731 23:14:45.817300 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:45.820285 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.820665 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:45.820698 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.820831 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:14:45.821032 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.821220 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.821369 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:14:45.821584 1212267 main.go:141] libmachine: Using SSH client type: native
	I0731 23:14:45.821811 1212267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0731 23:14:45.821826 1212267 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-615814 && echo "multinode-615814" | sudo tee /etc/hostname
	I0731 23:14:45.951924 1212267 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-615814
	
	I0731 23:14:45.951965 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:45.955086 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.955564 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:45.955622 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.955809 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:14:45.956044 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.956224 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.956354 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:14:45.956576 1212267 main.go:141] libmachine: Using SSH client type: native
	I0731 23:14:45.956802 1212267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0731 23:14:45.956826 1212267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-615814' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-615814/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-615814' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:14:46.073405 1212267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:14:46.073448 1212267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 23:14:46.073468 1212267 buildroot.go:174] setting up certificates
	I0731 23:14:46.073480 1212267 provision.go:84] configureAuth start
	I0731 23:14:46.073494 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetMachineName
	I0731 23:14:46.073802 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetIP
	I0731 23:14:46.076605 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.077074 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:46.077109 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.077347 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:46.079708 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.080115 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:46.080144 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.080324 1212267 provision.go:143] copyHostCerts
	I0731 23:14:46.080361 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 23:14:46.080395 1212267 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 23:14:46.080404 1212267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 23:14:46.080474 1212267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 23:14:46.080626 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 23:14:46.080649 1212267 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 23:14:46.080654 1212267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 23:14:46.080681 1212267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 23:14:46.080726 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 23:14:46.080742 1212267 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 23:14:46.080749 1212267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 23:14:46.080770 1212267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 23:14:46.080824 1212267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.multinode-615814 san=[127.0.0.1 192.168.39.129 localhost minikube multinode-615814]
	I0731 23:14:46.351568 1212267 provision.go:177] copyRemoteCerts
	I0731 23:14:46.351637 1212267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:14:46.351664 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:46.354717 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.355162 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:46.355210 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.355390 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:14:46.355622 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:46.355806 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:14:46.355954 1212267 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/multinode-615814/id_rsa Username:docker}
	I0731 23:14:46.443839 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 23:14:46.443943 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 23:14:46.471190 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 23:14:46.471276 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 23:14:46.497796 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 23:14:46.497879 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 23:14:46.524609 1212267 provision.go:87] duration metric: took 451.104502ms to configureAuth
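The "generating server cert ... san=[...]" step logged above builds a TLS server certificate carrying the listed IP and DNS SANs. A minimal sketch with crypto/x509 follows, as an illustration only: it self-signs for brevity, whereas the real step signs with the ca.pem/ca-key.pem pair shown in the log.

// Sketch: create a server certificate whose SANs match the log line
// (127.0.0.1, 192.168.39.129, localhost, minikube, multinode-615814).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-615814"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provisioning log: addresses and names the server answers as.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.129")},
		DNSNames:    []string{"localhost", "minikube", "multinode-615814"},
	}
	// Self-signed here (template doubles as parent); the real flow would pass
	// the CA certificate and CA private key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}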
	I0731 23:14:46.524647 1212267 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:14:46.524948 1212267 config.go:182] Loaded profile config "multinode-615814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:14:46.525044 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:46.527677 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.528107 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:46.528143 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.528346 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:14:46.528595 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:46.528782 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:46.528957 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:14:46.529168 1212267 main.go:141] libmachine: Using SSH client type: native
	I0731 23:14:46.529343 1212267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0731 23:14:46.529358 1212267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 23:16:17.247478 1212267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 23:16:17.247520 1212267 machine.go:97] duration metric: took 1m31.548678578s to provisionDockerMachine
	I0731 23:16:17.247539 1212267 start.go:293] postStartSetup for "multinode-615814" (driver="kvm2")
	I0731 23:16:17.247550 1212267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:16:17.247569 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:16:17.247943 1212267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:16:17.247982 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:16:17.251499 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.252068 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:17.252113 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.252294 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:16:17.252551 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:16:17.252757 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:16:17.252950 1212267 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/multinode-615814/id_rsa Username:docker}
	I0731 23:16:17.339522 1212267 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:16:17.343736 1212267 command_runner.go:130] > NAME=Buildroot
	I0731 23:16:17.343768 1212267 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 23:16:17.343773 1212267 command_runner.go:130] > ID=buildroot
	I0731 23:16:17.343780 1212267 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 23:16:17.343785 1212267 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 23:16:17.343927 1212267 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 23:16:17.343960 1212267 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 23:16:17.344134 1212267 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 23:16:17.344243 1212267 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 23:16:17.344257 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /etc/ssl/certs/11794002.pem
	I0731 23:16:17.344354 1212267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:16:17.354628 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:16:17.379437 1212267 start.go:296] duration metric: took 131.879782ms for postStartSetup
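
postStartSetup above scans the local .minikube/files tree and copies each file to the same path inside the VM (here only /etc/ssl/certs/11794002.pem). A rough Go sketch of that scan, assuming the logged root path and printing the mapping instead of copying:

    package main

    import (
        "fmt"
        "io/fs"
        "path/filepath"
    )

    func main() {
        // Local assets root, as logged; each regular file maps to the same path inside the VM.
        root := "/home/jenkins/minikube-integration/19312-1172186/.minikube/files"
        _ = filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return nil // skip unreadable entries and directories
            }
            rel, _ := filepath.Rel(root, p)
            fmt.Printf("local asset: %s -> /%s\n", p, rel)
            return nil
        })
    }
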
	I0731 23:16:17.379497 1212267 fix.go:56] duration metric: took 1m31.704049881s for fixHost
	I0731 23:16:17.379531 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:16:17.382647 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.383049 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:17.383079 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.383347 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:16:17.383612 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:16:17.383822 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:16:17.383982 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:16:17.384215 1212267 main.go:141] libmachine: Using SSH client type: native
	I0731 23:16:17.384459 1212267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0731 23:16:17.384500 1212267 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 23:16:17.497032 1212267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722467777.471916767
	
	I0731 23:16:17.497058 1212267 fix.go:216] guest clock: 1722467777.471916767
	I0731 23:16:17.497066 1212267 fix.go:229] Guest: 2024-07-31 23:16:17.471916767 +0000 UTC Remote: 2024-07-31 23:16:17.379503265 +0000 UTC m=+91.846296835 (delta=92.413502ms)
	I0731 23:16:17.497089 1212267 fix.go:200] guest clock delta is within tolerance: 92.413502ms
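
The fix.go lines above compare the guest's `date +%s.%N` output against the host-side timestamp and accept the skew if it is within tolerance. A sketch of that comparison using the two values from the log (the 1-second tolerance is an assumption for illustration):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func main() {
        // Guest output of `date +%s.%N` and the host-side timestamp, both taken from the log.
        guestOut := "1722467777.471916767"
        secs, err := strconv.ParseFloat(guestOut, 64) // float64 loses sub-microsecond precision; fine for a skew check
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        remote := time.Unix(0, 1722467777379503265)
        delta := guest.Sub(remote)
        tolerance := time.Second // assumed tolerance, for illustration only
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() <= tolerance)
    }
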
	I0731 23:16:17.497096 1212267 start.go:83] releasing machines lock for "multinode-615814", held for 1m31.821667272s
	I0731 23:16:17.497117 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:16:17.497423 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetIP
	I0731 23:16:17.500425 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.500781 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:17.500825 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.501069 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:16:17.501673 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:16:17.501862 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:16:17.501946 1212267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 23:16:17.501992 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:16:17.502109 1212267 ssh_runner.go:195] Run: cat /version.json
	I0731 23:16:17.502137 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:16:17.505007 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.505250 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.505447 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:17.505478 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.505648 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:16:17.505768 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:17.505797 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.505888 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:16:17.505977 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:16:17.506070 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:16:17.506138 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:16:17.506245 1212267 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/multinode-615814/id_rsa Username:docker}
	I0731 23:16:17.506420 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:16:17.506592 1212267 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/multinode-615814/id_rsa Username:docker}
	I0731 23:16:17.588883 1212267 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 23:16:17.589233 1212267 ssh_runner.go:195] Run: systemctl --version
	I0731 23:16:17.610253 1212267 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 23:16:17.610361 1212267 command_runner.go:130] > systemd 252 (252)
	I0731 23:16:17.610395 1212267 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 23:16:17.610456 1212267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 23:16:17.769476 1212267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 23:16:17.775462 1212267 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 23:16:17.775569 1212267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:16:17.775644 1212267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:16:17.785801 1212267 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
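
The find command above renames any bridge/podman CNI configs in /etc/cni/net.d to *.mk_disabled so they cannot conflict with the CNI configuration minikube manages; here nothing matched. An equivalent sketch in Go (illustrative, not the actual implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Same effect as the logged find/mv: rename bridge/podman CNI configs out of the way.
        matches, err := filepath.Glob("/etc/cni/net.d/*")
        if err != nil {
            panic(err)
        }
        for _, p := range matches {
            info, err := os.Stat(p)
            if err != nil || info.IsDir() {
                continue // mirror find's -type f
            }
            base := filepath.Base(p)
            if strings.HasSuffix(base, ".mk_disabled") {
                continue // already disabled
            }
            if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
                fmt.Println("disabling", p)
                _ = os.Rename(p, p+".mk_disabled")
            }
        }
    }
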
	I0731 23:16:17.785846 1212267 start.go:495] detecting cgroup driver to use...
	I0731 23:16:17.785929 1212267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:16:17.803844 1212267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:16:17.819207 1212267 docker.go:217] disabling cri-docker service (if available) ...
	I0731 23:16:17.819280 1212267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 23:16:17.834207 1212267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 23:16:17.849617 1212267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 23:16:18.008351 1212267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 23:16:18.174365 1212267 docker.go:233] disabling docker service ...
	I0731 23:16:18.174454 1212267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 23:16:18.194867 1212267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 23:16:18.209982 1212267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 23:16:18.371294 1212267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 23:16:18.529279 1212267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 23:16:18.544828 1212267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:16:18.564833 1212267 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 23:16:18.565124 1212267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 23:16:18.565200 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.576844 1212267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 23:16:18.576930 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.588430 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.600037 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.611808 1212267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:16:18.623846 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.635691 1212267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.647137 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.658692 1212267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:16:18.669166 1212267 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 23:16:18.669265 1212267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:16:18.679663 1212267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:16:18.826698 1212267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 23:16:25.528268 1212267 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.701521645s)
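
The sed edits above pin CRI-O's pause image to registry.k8s.io/pause:3.9 and switch cgroup_manager to cgroupfs before the 6.7s crio restart. The same two edits as a small Go sketch (root required; a sketch of the effect, not minikube's code):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        // Drop-in edited by the sed commands above.
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Pin the pause image and switch the cgroup manager, as in the log.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
    }

A daemon-reload and crio restart, as in the log, are still needed for the new settings to take effect.
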
	I0731 23:16:25.528304 1212267 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 23:16:25.528354 1212267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 23:16:25.533387 1212267 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 23:16:25.533426 1212267 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 23:16:25.533433 1212267 command_runner.go:130] > Device: 0,22	Inode: 1344        Links: 1
	I0731 23:16:25.533440 1212267 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 23:16:25.533447 1212267 command_runner.go:130] > Access: 2024-07-31 23:16:25.387899896 +0000
	I0731 23:16:25.533457 1212267 command_runner.go:130] > Modify: 2024-07-31 23:16:25.387899896 +0000
	I0731 23:16:25.533464 1212267 command_runner.go:130] > Change: 2024-07-31 23:16:25.387899896 +0000
	I0731 23:16:25.533469 1212267 command_runner.go:130] >  Birth: -
	I0731 23:16:25.533653 1212267 start.go:563] Will wait 60s for crictl version
	I0731 23:16:25.533718 1212267 ssh_runner.go:195] Run: which crictl
	I0731 23:16:25.537978 1212267 command_runner.go:130] > /usr/bin/crictl
	I0731 23:16:25.538063 1212267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:16:25.575186 1212267 command_runner.go:130] > Version:  0.1.0
	I0731 23:16:25.575216 1212267 command_runner.go:130] > RuntimeName:  cri-o
	I0731 23:16:25.575223 1212267 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0731 23:16:25.575230 1212267 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 23:16:25.576443 1212267 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 23:16:25.576523 1212267 ssh_runner.go:195] Run: crio --version
	I0731 23:16:25.606483 1212267 command_runner.go:130] > crio version 1.29.1
	I0731 23:16:25.606517 1212267 command_runner.go:130] > Version:        1.29.1
	I0731 23:16:25.606524 1212267 command_runner.go:130] > GitCommit:      unknown
	I0731 23:16:25.606529 1212267 command_runner.go:130] > GitCommitDate:  unknown
	I0731 23:16:25.606533 1212267 command_runner.go:130] > GitTreeState:   clean
	I0731 23:16:25.606538 1212267 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 23:16:25.606542 1212267 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 23:16:25.606546 1212267 command_runner.go:130] > Compiler:       gc
	I0731 23:16:25.606550 1212267 command_runner.go:130] > Platform:       linux/amd64
	I0731 23:16:25.606555 1212267 command_runner.go:130] > Linkmode:       dynamic
	I0731 23:16:25.606559 1212267 command_runner.go:130] > BuildTags:      
	I0731 23:16:25.606564 1212267 command_runner.go:130] >   containers_image_ostree_stub
	I0731 23:16:25.606570 1212267 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 23:16:25.606574 1212267 command_runner.go:130] >   btrfs_noversion
	I0731 23:16:25.606578 1212267 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 23:16:25.606583 1212267 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 23:16:25.606588 1212267 command_runner.go:130] >   seccomp
	I0731 23:16:25.606593 1212267 command_runner.go:130] > LDFlags:          unknown
	I0731 23:16:25.606600 1212267 command_runner.go:130] > SeccompEnabled:   true
	I0731 23:16:25.606607 1212267 command_runner.go:130] > AppArmorEnabled:  false
	I0731 23:16:25.606719 1212267 ssh_runner.go:195] Run: crio --version
	I0731 23:16:25.635621 1212267 command_runner.go:130] > crio version 1.29.1
	I0731 23:16:25.635649 1212267 command_runner.go:130] > Version:        1.29.1
	I0731 23:16:25.635655 1212267 command_runner.go:130] > GitCommit:      unknown
	I0731 23:16:25.635660 1212267 command_runner.go:130] > GitCommitDate:  unknown
	I0731 23:16:25.635663 1212267 command_runner.go:130] > GitTreeState:   clean
	I0731 23:16:25.635669 1212267 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 23:16:25.635673 1212267 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 23:16:25.635676 1212267 command_runner.go:130] > Compiler:       gc
	I0731 23:16:25.635681 1212267 command_runner.go:130] > Platform:       linux/amd64
	I0731 23:16:25.635685 1212267 command_runner.go:130] > Linkmode:       dynamic
	I0731 23:16:25.635690 1212267 command_runner.go:130] > BuildTags:      
	I0731 23:16:25.635694 1212267 command_runner.go:130] >   containers_image_ostree_stub
	I0731 23:16:25.635738 1212267 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 23:16:25.635751 1212267 command_runner.go:130] >   btrfs_noversion
	I0731 23:16:25.635759 1212267 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 23:16:25.635768 1212267 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 23:16:25.635774 1212267 command_runner.go:130] >   seccomp
	I0731 23:16:25.635783 1212267 command_runner.go:130] > LDFlags:          unknown
	I0731 23:16:25.635789 1212267 command_runner.go:130] > SeccompEnabled:   true
	I0731 23:16:25.635796 1212267 command_runner.go:130] > AppArmorEnabled:  false
	I0731 23:16:25.637862 1212267 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 23:16:25.639022 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetIP
	I0731 23:16:25.641964 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:25.642491 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:25.642521 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:25.642810 1212267 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 23:16:25.647247 1212267 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0731 23:16:25.647398 1212267 kubeadm.go:883] updating cluster {Name:multinode-615814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-615814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 23:16:25.647593 1212267 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 23:16:25.647689 1212267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:16:25.691330 1212267 command_runner.go:130] > {
	I0731 23:16:25.691358 1212267 command_runner.go:130] >   "images": [
	I0731 23:16:25.691362 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691370 1212267 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 23:16:25.691374 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691380 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 23:16:25.691387 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691391 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691401 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 23:16:25.691408 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 23:16:25.691411 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691416 1212267 command_runner.go:130] >       "size": "87165492",
	I0731 23:16:25.691419 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.691426 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691443 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691447 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691451 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691454 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691461 1212267 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 23:16:25.691468 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691473 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 23:16:25.691476 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691480 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691487 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 23:16:25.691497 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 23:16:25.691500 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691504 1212267 command_runner.go:130] >       "size": "87174707",
	I0731 23:16:25.691508 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.691525 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691529 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691532 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691536 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691539 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691544 1212267 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 23:16:25.691548 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691553 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 23:16:25.691557 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691561 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691568 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 23:16:25.691575 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 23:16:25.691581 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691586 1212267 command_runner.go:130] >       "size": "1363676",
	I0731 23:16:25.691589 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.691594 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691598 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691602 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691605 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691611 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691616 1212267 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 23:16:25.691621 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691627 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 23:16:25.691631 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691636 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691643 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 23:16:25.691656 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 23:16:25.691662 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691666 1212267 command_runner.go:130] >       "size": "31470524",
	I0731 23:16:25.691670 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.691674 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691680 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691683 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691687 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691693 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691699 1212267 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 23:16:25.691705 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691709 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 23:16:25.691716 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691719 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691728 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 23:16:25.691735 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 23:16:25.691740 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691744 1212267 command_runner.go:130] >       "size": "61245718",
	I0731 23:16:25.691755 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.691761 1212267 command_runner.go:130] >       "username": "nonroot",
	I0731 23:16:25.691765 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691772 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691775 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691781 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691787 1212267 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 23:16:25.691793 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691798 1212267 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 23:16:25.691804 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691808 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691815 1212267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 23:16:25.691824 1212267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 23:16:25.691830 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691835 1212267 command_runner.go:130] >       "size": "150779692",
	I0731 23:16:25.691840 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.691843 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.691847 1212267 command_runner.go:130] >       },
	I0731 23:16:25.691852 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691855 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691862 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691865 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691871 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691876 1212267 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 23:16:25.691882 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691887 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 23:16:25.691892 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691896 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691906 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 23:16:25.691915 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 23:16:25.691920 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691924 1212267 command_runner.go:130] >       "size": "117609954",
	I0731 23:16:25.691929 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.691933 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.691939 1212267 command_runner.go:130] >       },
	I0731 23:16:25.691942 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691948 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691951 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691957 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691960 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691968 1212267 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 23:16:25.691974 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691979 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 23:16:25.691985 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691989 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.692005 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 23:16:25.692015 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 23:16:25.692021 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692026 1212267 command_runner.go:130] >       "size": "112198984",
	I0731 23:16:25.692032 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.692036 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.692040 1212267 command_runner.go:130] >       },
	I0731 23:16:25.692043 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.692047 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.692050 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.692053 1212267 command_runner.go:130] >     },
	I0731 23:16:25.692057 1212267 command_runner.go:130] >     {
	I0731 23:16:25.692062 1212267 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 23:16:25.692065 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.692070 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 23:16:25.692073 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692077 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.692084 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 23:16:25.692107 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 23:16:25.692111 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692115 1212267 command_runner.go:130] >       "size": "85953945",
	I0731 23:16:25.692118 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.692122 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.692126 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.692130 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.692133 1212267 command_runner.go:130] >     },
	I0731 23:16:25.692136 1212267 command_runner.go:130] >     {
	I0731 23:16:25.692141 1212267 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 23:16:25.692144 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.692149 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 23:16:25.692152 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692156 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.692162 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 23:16:25.692169 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 23:16:25.692172 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692178 1212267 command_runner.go:130] >       "size": "63051080",
	I0731 23:16:25.692182 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.692185 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.692188 1212267 command_runner.go:130] >       },
	I0731 23:16:25.692192 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.692196 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.692199 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.692202 1212267 command_runner.go:130] >     },
	I0731 23:16:25.692206 1212267 command_runner.go:130] >     {
	I0731 23:16:25.692212 1212267 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 23:16:25.692216 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.692221 1212267 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 23:16:25.692226 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692230 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.692237 1212267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 23:16:25.692246 1212267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 23:16:25.692249 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692253 1212267 command_runner.go:130] >       "size": "750414",
	I0731 23:16:25.692258 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.692262 1212267 command_runner.go:130] >         "value": "65535"
	I0731 23:16:25.692268 1212267 command_runner.go:130] >       },
	I0731 23:16:25.692272 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.692278 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.692282 1212267 command_runner.go:130] >       "pinned": true
	I0731 23:16:25.692288 1212267 command_runner.go:130] >     }
	I0731 23:16:25.692292 1212267 command_runner.go:130] >   ]
	I0731 23:16:25.692297 1212267 command_runner.go:130] > }
	I0731 23:16:25.693023 1212267 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 23:16:25.693046 1212267 crio.go:433] Images already preloaded, skipping extraction
	I0731 23:16:25.693102 1212267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:16:25.727290 1212267 command_runner.go:130] > {
	I0731 23:16:25.727316 1212267 command_runner.go:130] >   "images": [
	I0731 23:16:25.727322 1212267 command_runner.go:130] >     {
	I0731 23:16:25.727335 1212267 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 23:16:25.727342 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.727356 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 23:16:25.727362 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727368 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.727380 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 23:16:25.727393 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 23:16:25.727408 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727417 1212267 command_runner.go:130] >       "size": "87165492",
	I0731 23:16:25.727425 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.727432 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.727444 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.727454 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.727461 1212267 command_runner.go:130] >     },
	I0731 23:16:25.727466 1212267 command_runner.go:130] >     {
	I0731 23:16:25.727472 1212267 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 23:16:25.727477 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.727485 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 23:16:25.727491 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727499 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.727511 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 23:16:25.727522 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 23:16:25.727532 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727538 1212267 command_runner.go:130] >       "size": "87174707",
	I0731 23:16:25.727545 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.727557 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.727565 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.727574 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.727579 1212267 command_runner.go:130] >     },
	I0731 23:16:25.727588 1212267 command_runner.go:130] >     {
	I0731 23:16:25.727599 1212267 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 23:16:25.727609 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.727617 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 23:16:25.727625 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727632 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.727646 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 23:16:25.727655 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 23:16:25.727665 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727676 1212267 command_runner.go:130] >       "size": "1363676",
	I0731 23:16:25.727685 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.727691 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.727701 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.727711 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.727721 1212267 command_runner.go:130] >     },
	I0731 23:16:25.727734 1212267 command_runner.go:130] >     {
	I0731 23:16:25.727744 1212267 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 23:16:25.727754 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.727763 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 23:16:25.727772 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727778 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.727793 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 23:16:25.727813 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 23:16:25.727820 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727825 1212267 command_runner.go:130] >       "size": "31470524",
	I0731 23:16:25.727830 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.727835 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.727845 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.727855 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.727864 1212267 command_runner.go:130] >     },
	I0731 23:16:25.727870 1212267 command_runner.go:130] >     {
	I0731 23:16:25.727882 1212267 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 23:16:25.727891 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.727902 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 23:16:25.727908 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727912 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.727927 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 23:16:25.727943 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 23:16:25.727951 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727958 1212267 command_runner.go:130] >       "size": "61245718",
	I0731 23:16:25.727967 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.727977 1212267 command_runner.go:130] >       "username": "nonroot",
	I0731 23:16:25.727986 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.727992 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.727995 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728004 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728018 1212267 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 23:16:25.728028 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728038 1212267 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 23:16:25.728044 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728056 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728070 1212267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 23:16:25.728080 1212267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 23:16:25.728104 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728114 1212267 command_runner.go:130] >       "size": "150779692",
	I0731 23:16:25.728122 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.728129 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.728137 1212267 command_runner.go:130] >       },
	I0731 23:16:25.728147 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728156 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728165 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.728174 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728180 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728192 1212267 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 23:16:25.728203 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728211 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 23:16:25.728220 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728230 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728244 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 23:16:25.728257 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 23:16:25.728262 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728269 1212267 command_runner.go:130] >       "size": "117609954",
	I0731 23:16:25.728278 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.728288 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.728294 1212267 command_runner.go:130] >       },
	I0731 23:16:25.728303 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728311 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728320 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.728329 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728337 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728343 1212267 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 23:16:25.728351 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728359 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 23:16:25.728368 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728375 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728399 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 23:16:25.728416 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 23:16:25.728424 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728430 1212267 command_runner.go:130] >       "size": "112198984",
	I0731 23:16:25.728437 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.728444 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.728453 1212267 command_runner.go:130] >       },
	I0731 23:16:25.728460 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728466 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728474 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.728482 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728488 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728501 1212267 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 23:16:25.728508 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728515 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 23:16:25.728519 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728526 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728540 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 23:16:25.728556 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 23:16:25.728564 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728574 1212267 command_runner.go:130] >       "size": "85953945",
	I0731 23:16:25.728583 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.728592 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728599 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728603 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.728611 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728619 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728633 1212267 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 23:16:25.728643 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728651 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 23:16:25.728660 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728667 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728680 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 23:16:25.728692 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 23:16:25.728701 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728711 1212267 command_runner.go:130] >       "size": "63051080",
	I0731 23:16:25.728720 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.728747 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.728756 1212267 command_runner.go:130] >       },
	I0731 23:16:25.728763 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728770 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728774 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.728782 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728791 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728803 1212267 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 23:16:25.728813 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728822 1212267 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 23:16:25.728830 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728840 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728852 1212267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 23:16:25.728863 1212267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 23:16:25.728874 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728884 1212267 command_runner.go:130] >       "size": "750414",
	I0731 23:16:25.728893 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.728904 1212267 command_runner.go:130] >         "value": "65535"
	I0731 23:16:25.728913 1212267 command_runner.go:130] >       },
	I0731 23:16:25.728921 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728931 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728938 1212267 command_runner.go:130] >       "pinned": true
	I0731 23:16:25.728941 1212267 command_runner.go:130] >     }
	I0731 23:16:25.728949 1212267 command_runner.go:130] >   ]
	I0731 23:16:25.728954 1212267 command_runner.go:130] > }
	I0731 23:16:25.729128 1212267 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 23:16:25.729145 1212267 cache_images.go:84] Images are preloaded, skipping loading
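	(Editor's note: the JSON dumped above has a small, stable per-image shape — id, repoTags, repoDigests, size, uid, pinned — which is what crio.go:514 inspects before deciding the preload is complete. Below is a minimal Go sketch, not minikube's own code, that decodes a listing of that shape; it assumes the top-level key is "images", as `crictl images -o json` emits, and the type names are illustrative only.)

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList mirrors the JSON shape shown in the log above; the type and
	// field names are illustrative, not minikube's internal types. The
	// per-image "uid" object is omitted for brevity.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// raw stands in for the full listing captured from the runtime.
		raw := []byte(`{"images":[{"id":"e6f18...","repoTags":["registry.k8s.io/pause:3.9"],"repoDigests":[],"size":"750414","pinned":true}]}`)

		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Printf("%-45v size=%s pinned=%v\n", img.RepoTags, img.Size, img.Pinned)
		}
	}
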
	I0731 23:16:25.729156 1212267 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.30.3 crio true true} ...
	I0731 23:16:25.729279 1212267 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-615814 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-615814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
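	(Editor's note: the kubelet unit text and the config struct above are rendered by minikube from an internal template. The sketch below is not that template; it is a minimal, self-contained illustration of the same idea using Go's text/template, with a hypothetical nodeConfig struct holding the values that appear in the log: Kubernetes version, hostname override, and node IP.)

	package main

	import (
		"os"
		"text/template"
	)

	// nodeConfig is a hypothetical, trimmed-down stand-in for the values
	// interpolated into the kubelet systemd drop-in shown in the log above.
	type nodeConfig struct {
		KubernetesVersion string
		Hostname          string
		NodeIP            string
	}

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		cfg := nodeConfig{
			KubernetesVersion: "v1.30.3",
			Hostname:          "multinode-615814",
			NodeIP:            "192.168.39.129",
		}
		// Rendering to stdout reproduces the drop-in text shown in the log.
		if err := tmpl.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}
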
	I0731 23:16:25.729370 1212267 ssh_runner.go:195] Run: crio config
	I0731 23:16:25.771287 1212267 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 23:16:25.771314 1212267 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 23:16:25.771325 1212267 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 23:16:25.771328 1212267 command_runner.go:130] > #
	I0731 23:16:25.771337 1212267 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 23:16:25.771347 1212267 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 23:16:25.771356 1212267 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 23:16:25.771366 1212267 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 23:16:25.771372 1212267 command_runner.go:130] > # reload'.
	I0731 23:16:25.771382 1212267 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 23:16:25.771396 1212267 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 23:16:25.771406 1212267 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 23:16:25.771415 1212267 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 23:16:25.771423 1212267 command_runner.go:130] > [crio]
	I0731 23:16:25.771433 1212267 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 23:16:25.771445 1212267 command_runner.go:130] > # containers images, in this directory.
	I0731 23:16:25.771452 1212267 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0731 23:16:25.771468 1212267 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 23:16:25.771478 1212267 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0731 23:16:25.771488 1212267 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0731 23:16:25.771498 1212267 command_runner.go:130] > # imagestore = ""
	I0731 23:16:25.771508 1212267 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 23:16:25.771518 1212267 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 23:16:25.771528 1212267 command_runner.go:130] > storage_driver = "overlay"
	I0731 23:16:25.771536 1212267 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 23:16:25.771547 1212267 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 23:16:25.771556 1212267 command_runner.go:130] > storage_option = [
	I0731 23:16:25.771566 1212267 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0731 23:16:25.771576 1212267 command_runner.go:130] > ]
	I0731 23:16:25.771586 1212267 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 23:16:25.771601 1212267 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 23:16:25.771610 1212267 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 23:16:25.771618 1212267 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 23:16:25.771633 1212267 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 23:16:25.771644 1212267 command_runner.go:130] > # always happen on a node reboot
	I0731 23:16:25.771656 1212267 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 23:16:25.771674 1212267 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 23:16:25.771687 1212267 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 23:16:25.771697 1212267 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 23:16:25.771705 1212267 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0731 23:16:25.771719 1212267 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 23:16:25.771733 1212267 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 23:16:25.771740 1212267 command_runner.go:130] > # internal_wipe = true
	I0731 23:16:25.771748 1212267 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0731 23:16:25.771757 1212267 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0731 23:16:25.771762 1212267 command_runner.go:130] > # internal_repair = false
	I0731 23:16:25.771775 1212267 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 23:16:25.771787 1212267 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 23:16:25.771799 1212267 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 23:16:25.771811 1212267 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 23:16:25.771821 1212267 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 23:16:25.771830 1212267 command_runner.go:130] > [crio.api]
	I0731 23:16:25.771839 1212267 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 23:16:25.771850 1212267 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 23:16:25.771859 1212267 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 23:16:25.771869 1212267 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 23:16:25.771880 1212267 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 23:16:25.771891 1212267 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 23:16:25.771900 1212267 command_runner.go:130] > # stream_port = "0"
	I0731 23:16:25.771909 1212267 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 23:16:25.771919 1212267 command_runner.go:130] > # stream_enable_tls = false
	I0731 23:16:25.771928 1212267 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 23:16:25.771938 1212267 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 23:16:25.771948 1212267 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 23:16:25.771962 1212267 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 23:16:25.771971 1212267 command_runner.go:130] > # minutes.
	I0731 23:16:25.771978 1212267 command_runner.go:130] > # stream_tls_cert = ""
	I0731 23:16:25.771991 1212267 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 23:16:25.772003 1212267 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 23:16:25.772015 1212267 command_runner.go:130] > # stream_tls_key = ""
	I0731 23:16:25.772028 1212267 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 23:16:25.772043 1212267 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 23:16:25.772062 1212267 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 23:16:25.772072 1212267 command_runner.go:130] > # stream_tls_ca = ""
	I0731 23:16:25.772084 1212267 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 23:16:25.772111 1212267 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0731 23:16:25.772125 1212267 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 23:16:25.772133 1212267 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
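	(Editor's note: the two grpc_max_*_msg_size values above cap CRI messages at 16 MiB (16777216 bytes). A client talking to this socket should use matching limits. The following is a minimal sketch, assuming the default socket path from the commented `listen` setting above and the CRI v1 API from k8s.io/cri-api; it is not something minikube runs as part of this test.)

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Default socket path per the commented "# listen" setting above.
		const sock = "unix:///var/run/crio/crio.sock"

		// Match the 16 MiB caps from grpc_max_send_msg_size / grpc_max_recv_msg_size.
		conn, err := grpc.Dial(sock,
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithDefaultCallOptions(
				grpc.MaxCallRecvMsgSize(16*1024*1024),
				grpc.MaxCallSendMsgSize(16*1024*1024),
			),
		)
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// A simple Version call confirms the runtime is reachable.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(resp.GetRuntimeName(), resp.GetRuntimeVersion())
	}
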
	I0731 23:16:25.772147 1212267 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 23:16:25.772158 1212267 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 23:16:25.772168 1212267 command_runner.go:130] > [crio.runtime]
	I0731 23:16:25.772178 1212267 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 23:16:25.772193 1212267 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 23:16:25.772203 1212267 command_runner.go:130] > # "nofile=1024:2048"
	I0731 23:16:25.772213 1212267 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 23:16:25.772222 1212267 command_runner.go:130] > # default_ulimits = [
	I0731 23:16:25.772227 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.772237 1212267 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 23:16:25.772247 1212267 command_runner.go:130] > # no_pivot = false
	I0731 23:16:25.772256 1212267 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 23:16:25.772269 1212267 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 23:16:25.772277 1212267 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 23:16:25.772290 1212267 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 23:16:25.772302 1212267 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 23:16:25.772314 1212267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 23:16:25.772325 1212267 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0731 23:16:25.772332 1212267 command_runner.go:130] > # Cgroup setting for conmon
	I0731 23:16:25.772346 1212267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 23:16:25.772355 1212267 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 23:16:25.772365 1212267 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 23:16:25.772376 1212267 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 23:16:25.772386 1212267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 23:16:25.772394 1212267 command_runner.go:130] > conmon_env = [
	I0731 23:16:25.772400 1212267 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 23:16:25.772406 1212267 command_runner.go:130] > ]
	I0731 23:16:25.772411 1212267 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 23:16:25.772418 1212267 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 23:16:25.772424 1212267 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 23:16:25.772433 1212267 command_runner.go:130] > # default_env = [
	I0731 23:16:25.772438 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.772450 1212267 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 23:16:25.772462 1212267 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0731 23:16:25.772470 1212267 command_runner.go:130] > # selinux = false
	I0731 23:16:25.772481 1212267 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 23:16:25.772494 1212267 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 23:16:25.772504 1212267 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 23:16:25.772511 1212267 command_runner.go:130] > # seccomp_profile = ""
	I0731 23:16:25.772521 1212267 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 23:16:25.772533 1212267 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 23:16:25.772546 1212267 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 23:16:25.772557 1212267 command_runner.go:130] > # which might increase security.
	I0731 23:16:25.772564 1212267 command_runner.go:130] > # This option is currently deprecated,
	I0731 23:16:25.772577 1212267 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0731 23:16:25.772588 1212267 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0731 23:16:25.772599 1212267 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 23:16:25.772612 1212267 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 23:16:25.772625 1212267 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 23:16:25.772637 1212267 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 23:16:25.772649 1212267 command_runner.go:130] > # This option supports live configuration reload.
	I0731 23:16:25.772659 1212267 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 23:16:25.772668 1212267 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 23:16:25.772678 1212267 command_runner.go:130] > # the cgroup blockio controller.
	I0731 23:16:25.772686 1212267 command_runner.go:130] > # blockio_config_file = ""
	I0731 23:16:25.772699 1212267 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0731 23:16:25.772708 1212267 command_runner.go:130] > # blockio parameters.
	I0731 23:16:25.772716 1212267 command_runner.go:130] > # blockio_reload = false
	I0731 23:16:25.772729 1212267 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 23:16:25.772739 1212267 command_runner.go:130] > # irqbalance daemon.
	I0731 23:16:25.772748 1212267 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 23:16:25.772761 1212267 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0731 23:16:25.772777 1212267 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0731 23:16:25.772792 1212267 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0731 23:16:25.772804 1212267 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0731 23:16:25.772818 1212267 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 23:16:25.772829 1212267 command_runner.go:130] > # This option supports live configuration reload.
	I0731 23:16:25.772840 1212267 command_runner.go:130] > # rdt_config_file = ""
	I0731 23:16:25.772852 1212267 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 23:16:25.772861 1212267 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 23:16:25.772899 1212267 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 23:16:25.772912 1212267 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 23:16:25.772922 1212267 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 23:16:25.772931 1212267 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 23:16:25.772938 1212267 command_runner.go:130] > # will be added.
	I0731 23:16:25.772949 1212267 command_runner.go:130] > # default_capabilities = [
	I0731 23:16:25.772954 1212267 command_runner.go:130] > # 	"CHOWN",
	I0731 23:16:25.772960 1212267 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 23:16:25.772969 1212267 command_runner.go:130] > # 	"FSETID",
	I0731 23:16:25.772976 1212267 command_runner.go:130] > # 	"FOWNER",
	I0731 23:16:25.772982 1212267 command_runner.go:130] > # 	"SETGID",
	I0731 23:16:25.772990 1212267 command_runner.go:130] > # 	"SETUID",
	I0731 23:16:25.772997 1212267 command_runner.go:130] > # 	"SETPCAP",
	I0731 23:16:25.773006 1212267 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 23:16:25.773015 1212267 command_runner.go:130] > # 	"KILL",
	I0731 23:16:25.773021 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.773032 1212267 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 23:16:25.773043 1212267 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 23:16:25.773053 1212267 command_runner.go:130] > # add_inheritable_capabilities = false
	I0731 23:16:25.773062 1212267 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 23:16:25.773075 1212267 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 23:16:25.773085 1212267 command_runner.go:130] > default_sysctls = [
	I0731 23:16:25.773093 1212267 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0731 23:16:25.773100 1212267 command_runner.go:130] > ]
	I0731 23:16:25.773109 1212267 command_runner.go:130] > # List of devices on the host that a
	I0731 23:16:25.773122 1212267 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 23:16:25.773129 1212267 command_runner.go:130] > # allowed_devices = [
	I0731 23:16:25.773135 1212267 command_runner.go:130] > # 	"/dev/fuse",
	I0731 23:16:25.773140 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.773147 1212267 command_runner.go:130] > # List of additional devices. specified as
	I0731 23:16:25.773160 1212267 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 23:16:25.773173 1212267 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 23:16:25.773186 1212267 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 23:16:25.773195 1212267 command_runner.go:130] > # additional_devices = [
	I0731 23:16:25.773201 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.773211 1212267 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 23:16:25.773220 1212267 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 23:16:25.773226 1212267 command_runner.go:130] > # 	"/etc/cdi",
	I0731 23:16:25.773232 1212267 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 23:16:25.773241 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.773251 1212267 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 23:16:25.773261 1212267 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 23:16:25.773265 1212267 command_runner.go:130] > # Defaults to false.
	I0731 23:16:25.773270 1212267 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 23:16:25.773278 1212267 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 23:16:25.773284 1212267 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 23:16:25.773292 1212267 command_runner.go:130] > # hooks_dir = [
	I0731 23:16:25.773300 1212267 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 23:16:25.773308 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.773318 1212267 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 23:16:25.773331 1212267 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 23:16:25.773339 1212267 command_runner.go:130] > # its default mounts from the following two files:
	I0731 23:16:25.773348 1212267 command_runner.go:130] > #
	I0731 23:16:25.773357 1212267 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 23:16:25.773371 1212267 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 23:16:25.773383 1212267 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 23:16:25.773391 1212267 command_runner.go:130] > #
	I0731 23:16:25.773401 1212267 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 23:16:25.773414 1212267 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 23:16:25.773421 1212267 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 23:16:25.773426 1212267 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 23:16:25.773430 1212267 command_runner.go:130] > #
	I0731 23:16:25.773434 1212267 command_runner.go:130] > # default_mounts_file = ""
	I0731 23:16:25.773439 1212267 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 23:16:25.773449 1212267 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 23:16:25.773458 1212267 command_runner.go:130] > pids_limit = 1024
	I0731 23:16:25.773468 1212267 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0731 23:16:25.773483 1212267 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 23:16:25.773496 1212267 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 23:16:25.773512 1212267 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 23:16:25.773519 1212267 command_runner.go:130] > # log_size_max = -1
	I0731 23:16:25.773532 1212267 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 23:16:25.773542 1212267 command_runner.go:130] > # log_to_journald = false
	I0731 23:16:25.773552 1212267 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 23:16:25.773560 1212267 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 23:16:25.773566 1212267 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 23:16:25.773573 1212267 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 23:16:25.773579 1212267 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 23:16:25.773586 1212267 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 23:16:25.773590 1212267 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 23:16:25.773594 1212267 command_runner.go:130] > # read_only = false
	I0731 23:16:25.773602 1212267 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 23:16:25.773610 1212267 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 23:16:25.773619 1212267 command_runner.go:130] > # live configuration reload.
	I0731 23:16:25.773625 1212267 command_runner.go:130] > # log_level = "info"
	I0731 23:16:25.773639 1212267 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 23:16:25.773651 1212267 command_runner.go:130] > # This option supports live configuration reload.
	I0731 23:16:25.773660 1212267 command_runner.go:130] > # log_filter = ""
	I0731 23:16:25.773669 1212267 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 23:16:25.773682 1212267 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 23:16:25.773692 1212267 command_runner.go:130] > # separated by comma.
	I0731 23:16:25.773704 1212267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 23:16:25.773714 1212267 command_runner.go:130] > # uid_mappings = ""
	I0731 23:16:25.773724 1212267 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 23:16:25.773735 1212267 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 23:16:25.773742 1212267 command_runner.go:130] > # separated by comma.
	I0731 23:16:25.773754 1212267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 23:16:25.773762 1212267 command_runner.go:130] > # gid_mappings = ""
	I0731 23:16:25.773771 1212267 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 23:16:25.773782 1212267 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 23:16:25.773789 1212267 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 23:16:25.773798 1212267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 23:16:25.773804 1212267 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 23:16:25.773812 1212267 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 23:16:25.773820 1212267 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 23:16:25.773827 1212267 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 23:16:25.773836 1212267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 23:16:25.773844 1212267 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 23:16:25.773849 1212267 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 23:16:25.773857 1212267 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 23:16:25.773865 1212267 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 23:16:25.773869 1212267 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 23:16:25.773875 1212267 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 23:16:25.773882 1212267 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 23:16:25.773887 1212267 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 23:16:25.773894 1212267 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 23:16:25.773897 1212267 command_runner.go:130] > drop_infra_ctr = false
	I0731 23:16:25.773905 1212267 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 23:16:25.773910 1212267 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 23:16:25.773919 1212267 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 23:16:25.773925 1212267 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 23:16:25.773932 1212267 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0731 23:16:25.773939 1212267 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0731 23:16:25.773945 1212267 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0731 23:16:25.773953 1212267 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0731 23:16:25.773956 1212267 command_runner.go:130] > # shared_cpuset = ""
	I0731 23:16:25.773962 1212267 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 23:16:25.773968 1212267 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 23:16:25.773972 1212267 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 23:16:25.773979 1212267 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 23:16:25.773986 1212267 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0731 23:16:25.773991 1212267 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0731 23:16:25.773999 1212267 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0731 23:16:25.774003 1212267 command_runner.go:130] > # enable_criu_support = false
	I0731 23:16:25.774011 1212267 command_runner.go:130] > # Enable/disable the generation of the container,
	I0731 23:16:25.774019 1212267 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0731 23:16:25.774023 1212267 command_runner.go:130] > # enable_pod_events = false
	I0731 23:16:25.774031 1212267 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 23:16:25.774043 1212267 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0731 23:16:25.774049 1212267 command_runner.go:130] > # default_runtime = "runc"
	I0731 23:16:25.774054 1212267 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 23:16:25.774063 1212267 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0731 23:16:25.774074 1212267 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 23:16:25.774081 1212267 command_runner.go:130] > # creation as a file is not desired either.
	I0731 23:16:25.774089 1212267 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 23:16:25.774095 1212267 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 23:16:25.774100 1212267 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 23:16:25.774106 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.774112 1212267 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 23:16:25.774120 1212267 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 23:16:25.774127 1212267 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0731 23:16:25.774134 1212267 command_runner.go:130] > # Each entry in the table should follow the format:
	I0731 23:16:25.774138 1212267 command_runner.go:130] > #
	I0731 23:16:25.774142 1212267 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0731 23:16:25.774149 1212267 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0731 23:16:25.774170 1212267 command_runner.go:130] > # runtime_type = "oci"
	I0731 23:16:25.774176 1212267 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0731 23:16:25.774181 1212267 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0731 23:16:25.774188 1212267 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0731 23:16:25.774192 1212267 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0731 23:16:25.774198 1212267 command_runner.go:130] > # monitor_env = []
	I0731 23:16:25.774203 1212267 command_runner.go:130] > # privileged_without_host_devices = false
	I0731 23:16:25.774209 1212267 command_runner.go:130] > # allowed_annotations = []
	I0731 23:16:25.774214 1212267 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0731 23:16:25.774219 1212267 command_runner.go:130] > # Where:
	I0731 23:16:25.774224 1212267 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0731 23:16:25.774232 1212267 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0731 23:16:25.774239 1212267 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 23:16:25.774246 1212267 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 23:16:25.774253 1212267 command_runner.go:130] > #   in $PATH.
	I0731 23:16:25.774259 1212267 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0731 23:16:25.774265 1212267 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 23:16:25.774271 1212267 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0731 23:16:25.774277 1212267 command_runner.go:130] > #   state.
	I0731 23:16:25.774283 1212267 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 23:16:25.774291 1212267 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0731 23:16:25.774297 1212267 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 23:16:25.774304 1212267 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 23:16:25.774310 1212267 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 23:16:25.774318 1212267 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 23:16:25.774322 1212267 command_runner.go:130] > #   The currently recognized values are:
	I0731 23:16:25.774330 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 23:16:25.774337 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 23:16:25.774345 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 23:16:25.774351 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 23:16:25.774360 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 23:16:25.774365 1212267 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 23:16:25.774373 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0731 23:16:25.774381 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0731 23:16:25.774386 1212267 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 23:16:25.774393 1212267 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0731 23:16:25.774399 1212267 command_runner.go:130] > #   deprecated option "conmon".
	I0731 23:16:25.774408 1212267 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0731 23:16:25.774413 1212267 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0731 23:16:25.774421 1212267 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0731 23:16:25.774426 1212267 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 23:16:25.774434 1212267 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0731 23:16:25.774439 1212267 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0731 23:16:25.774448 1212267 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0731 23:16:25.774453 1212267 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0731 23:16:25.774457 1212267 command_runner.go:130] > #
	I0731 23:16:25.774461 1212267 command_runner.go:130] > # Using the seccomp notifier feature:
	I0731 23:16:25.774466 1212267 command_runner.go:130] > #
	I0731 23:16:25.774472 1212267 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0731 23:16:25.774480 1212267 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0731 23:16:25.774483 1212267 command_runner.go:130] > #
	I0731 23:16:25.774489 1212267 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0731 23:16:25.774497 1212267 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0731 23:16:25.774500 1212267 command_runner.go:130] > #
	I0731 23:16:25.774506 1212267 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0731 23:16:25.774511 1212267 command_runner.go:130] > # feature.
	I0731 23:16:25.774517 1212267 command_runner.go:130] > #
	I0731 23:16:25.774523 1212267 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0731 23:16:25.774532 1212267 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0731 23:16:25.774537 1212267 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0731 23:16:25.774545 1212267 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0731 23:16:25.774551 1212267 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0731 23:16:25.774556 1212267 command_runner.go:130] > #
	I0731 23:16:25.774562 1212267 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0731 23:16:25.774570 1212267 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0731 23:16:25.774573 1212267 command_runner.go:130] > #
	I0731 23:16:25.774580 1212267 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0731 23:16:25.774588 1212267 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0731 23:16:25.774591 1212267 command_runner.go:130] > #
	I0731 23:16:25.774597 1212267 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0731 23:16:25.774605 1212267 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0731 23:16:25.774608 1212267 command_runner.go:130] > # limitation.
	I0731 23:16:25.774614 1212267 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 23:16:25.774618 1212267 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0731 23:16:25.774622 1212267 command_runner.go:130] > runtime_type = "oci"
	I0731 23:16:25.774626 1212267 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 23:16:25.774630 1212267 command_runner.go:130] > runtime_config_path = ""
	I0731 23:16:25.774635 1212267 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0731 23:16:25.774641 1212267 command_runner.go:130] > monitor_cgroup = "pod"
	I0731 23:16:25.774646 1212267 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 23:16:25.774651 1212267 command_runner.go:130] > monitor_env = [
	I0731 23:16:25.774657 1212267 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 23:16:25.774662 1212267 command_runner.go:130] > ]
	I0731 23:16:25.774667 1212267 command_runner.go:130] > privileged_without_host_devices = false
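	(Editor's note: the [crio.runtime.runtimes.runc] handler above is what a Kubernetes RuntimeClass refers to by name. As a rough illustration, not part of this test run, the Go snippet below builds a RuntimeClass object that would select this handler and prints it as YAML; it assumes k8s.io/api and sigs.k8s.io/yaml are available.)

	package main

	import (
		"fmt"

		nodev1 "k8s.io/api/node/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// A RuntimeClass whose handler matches the "runc" entry in the CRI-O
		// config dumped above. Applying it (e.g. via kubectl) would let pods
		// opt into that handler with runtimeClassName: runc.
		rc := nodev1.RuntimeClass{
			TypeMeta: metav1.TypeMeta{
				APIVersion: "node.k8s.io/v1",
				Kind:       "RuntimeClass",
			},
			ObjectMeta: metav1.ObjectMeta{Name: "runc"},
			Handler:    "runc",
		}

		out, err := yaml.Marshal(rc)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}
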
	I0731 23:16:25.774674 1212267 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 23:16:25.774680 1212267 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 23:16:25.774687 1212267 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 23:16:25.774696 1212267 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0731 23:16:25.774705 1212267 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 23:16:25.774710 1212267 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 23:16:25.774721 1212267 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 23:16:25.774731 1212267 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 23:16:25.774739 1212267 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 23:16:25.774746 1212267 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 23:16:25.774749 1212267 command_runner.go:130] > # Example:
	I0731 23:16:25.774753 1212267 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 23:16:25.774758 1212267 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 23:16:25.774762 1212267 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 23:16:25.774770 1212267 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 23:16:25.774774 1212267 command_runner.go:130] > # cpuset = 0
	I0731 23:16:25.774778 1212267 command_runner.go:130] > # cpushares = "0-1"
	I0731 23:16:25.774781 1212267 command_runner.go:130] > # Where:
	I0731 23:16:25.774785 1212267 command_runner.go:130] > # The workload name is workload-type.
	I0731 23:16:25.774791 1212267 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 23:16:25.774796 1212267 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 23:16:25.774801 1212267 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 23:16:25.774809 1212267 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 23:16:25.774814 1212267 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0731 23:16:25.774818 1212267 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0731 23:16:25.774824 1212267 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0731 23:16:25.774829 1212267 command_runner.go:130] > # Default value is set to true
	I0731 23:16:25.774833 1212267 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0731 23:16:25.774838 1212267 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0731 23:16:25.774843 1212267 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0731 23:16:25.774847 1212267 command_runner.go:130] > # Default value is set to 'false'
	I0731 23:16:25.774850 1212267 command_runner.go:130] > # disable_hostport_mapping = false
	I0731 23:16:25.774856 1212267 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 23:16:25.774859 1212267 command_runner.go:130] > #
	I0731 23:16:25.774864 1212267 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 23:16:25.774870 1212267 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 23:16:25.774875 1212267 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 23:16:25.774881 1212267 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 23:16:25.774886 1212267 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0731 23:16:25.774889 1212267 command_runner.go:130] > [crio.image]
	I0731 23:16:25.774893 1212267 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 23:16:25.774897 1212267 command_runner.go:130] > # default_transport = "docker://"
	I0731 23:16:25.774903 1212267 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 23:16:25.774910 1212267 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 23:16:25.774913 1212267 command_runner.go:130] > # global_auth_file = ""
	I0731 23:16:25.774918 1212267 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 23:16:25.774926 1212267 command_runner.go:130] > # This option supports live configuration reload.
	I0731 23:16:25.774930 1212267 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0731 23:16:25.774936 1212267 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 23:16:25.774941 1212267 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 23:16:25.774945 1212267 command_runner.go:130] > # This option supports live configuration reload.
	I0731 23:16:25.774949 1212267 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 23:16:25.774954 1212267 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 23:16:25.774959 1212267 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0731 23:16:25.774965 1212267 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0731 23:16:25.774974 1212267 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 23:16:25.774978 1212267 command_runner.go:130] > # pause_command = "/pause"
	I0731 23:16:25.774986 1212267 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0731 23:16:25.774994 1212267 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0731 23:16:25.775003 1212267 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0731 23:16:25.775010 1212267 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0731 23:16:25.775016 1212267 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0731 23:16:25.775024 1212267 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0731 23:16:25.775030 1212267 command_runner.go:130] > # pinned_images = [
	I0731 23:16:25.775033 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.775041 1212267 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 23:16:25.775048 1212267 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 23:16:25.775056 1212267 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 23:16:25.775064 1212267 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 23:16:25.775069 1212267 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 23:16:25.775075 1212267 command_runner.go:130] > # signature_policy = ""
	I0731 23:16:25.775080 1212267 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0731 23:16:25.775088 1212267 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0731 23:16:25.775096 1212267 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0731 23:16:25.775102 1212267 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0731 23:16:25.775109 1212267 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0731 23:16:25.775114 1212267 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0731 23:16:25.775122 1212267 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 23:16:25.775130 1212267 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 23:16:25.775135 1212267 command_runner.go:130] > # changing them here.
	I0731 23:16:25.775144 1212267 command_runner.go:130] > # insecure_registries = [
	I0731 23:16:25.775147 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.775154 1212267 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 23:16:25.775161 1212267 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 23:16:25.775165 1212267 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 23:16:25.775172 1212267 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 23:16:25.775176 1212267 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 23:16:25.775184 1212267 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I0731 23:16:25.775191 1212267 command_runner.go:130] > # CNI plugins.
	I0731 23:16:25.775194 1212267 command_runner.go:130] > [crio.network]
	I0731 23:16:25.775202 1212267 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 23:16:25.775208 1212267 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0731 23:16:25.775214 1212267 command_runner.go:130] > # cni_default_network = ""
	I0731 23:16:25.775219 1212267 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 23:16:25.775225 1212267 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 23:16:25.775231 1212267 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 23:16:25.775236 1212267 command_runner.go:130] > # plugin_dirs = [
	I0731 23:16:25.775243 1212267 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 23:16:25.775249 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.775256 1212267 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 23:16:25.775262 1212267 command_runner.go:130] > [crio.metrics]
	I0731 23:16:25.775266 1212267 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 23:16:25.775272 1212267 command_runner.go:130] > enable_metrics = true
	I0731 23:16:25.775277 1212267 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 23:16:25.775284 1212267 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 23:16:25.775290 1212267 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0731 23:16:25.775299 1212267 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 23:16:25.775305 1212267 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 23:16:25.775311 1212267 command_runner.go:130] > # metrics_collectors = [
	I0731 23:16:25.775314 1212267 command_runner.go:130] > # 	"operations",
	I0731 23:16:25.775319 1212267 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 23:16:25.775326 1212267 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 23:16:25.775330 1212267 command_runner.go:130] > # 	"operations_errors",
	I0731 23:16:25.775334 1212267 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 23:16:25.775338 1212267 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 23:16:25.775343 1212267 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 23:16:25.775349 1212267 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 23:16:25.775353 1212267 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 23:16:25.775359 1212267 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 23:16:25.775363 1212267 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 23:16:25.775370 1212267 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0731 23:16:25.775375 1212267 command_runner.go:130] > # 	"containers_oom_total",
	I0731 23:16:25.775379 1212267 command_runner.go:130] > # 	"containers_oom",
	I0731 23:16:25.775386 1212267 command_runner.go:130] > # 	"processes_defunct",
	I0731 23:16:25.775396 1212267 command_runner.go:130] > # 	"operations_total",
	I0731 23:16:25.775403 1212267 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 23:16:25.775407 1212267 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 23:16:25.775412 1212267 command_runner.go:130] > # 	"operations_errors_total",
	I0731 23:16:25.775417 1212267 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 23:16:25.775423 1212267 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 23:16:25.775428 1212267 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 23:16:25.775432 1212267 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 23:16:25.775436 1212267 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 23:16:25.775440 1212267 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 23:16:25.775445 1212267 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0731 23:16:25.775450 1212267 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0731 23:16:25.775454 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.775459 1212267 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 23:16:25.775465 1212267 command_runner.go:130] > # metrics_port = 9090
	I0731 23:16:25.775469 1212267 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 23:16:25.775474 1212267 command_runner.go:130] > # metrics_socket = ""
	I0731 23:16:25.775479 1212267 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 23:16:25.775486 1212267 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 23:16:25.775492 1212267 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 23:16:25.775496 1212267 command_runner.go:130] > # certificate on any modification event.
	I0731 23:16:25.775502 1212267 command_runner.go:130] > # metrics_cert = ""
	I0731 23:16:25.775507 1212267 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 23:16:25.775513 1212267 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 23:16:25.775522 1212267 command_runner.go:130] > # metrics_key = ""
	I0731 23:16:25.775530 1212267 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 23:16:25.775537 1212267 command_runner.go:130] > [crio.tracing]
	I0731 23:16:25.775546 1212267 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 23:16:25.775554 1212267 command_runner.go:130] > # enable_tracing = false
	I0731 23:16:25.775562 1212267 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0731 23:16:25.775572 1212267 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 23:16:25.775580 1212267 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0731 23:16:25.775588 1212267 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0731 23:16:25.775592 1212267 command_runner.go:130] > # CRI-O NRI configuration.
	I0731 23:16:25.775595 1212267 command_runner.go:130] > [crio.nri]
	I0731 23:16:25.775600 1212267 command_runner.go:130] > # Globally enable or disable NRI.
	I0731 23:16:25.775603 1212267 command_runner.go:130] > # enable_nri = false
	I0731 23:16:25.775608 1212267 command_runner.go:130] > # NRI socket to listen on.
	I0731 23:16:25.775612 1212267 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0731 23:16:25.775617 1212267 command_runner.go:130] > # NRI plugin directory to use.
	I0731 23:16:25.775623 1212267 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0731 23:16:25.775628 1212267 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0731 23:16:25.775635 1212267 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0731 23:16:25.775640 1212267 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0731 23:16:25.775647 1212267 command_runner.go:130] > # nri_disable_connections = false
	I0731 23:16:25.775653 1212267 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0731 23:16:25.775661 1212267 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0731 23:16:25.775669 1212267 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0731 23:16:25.775679 1212267 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0731 23:16:25.775689 1212267 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 23:16:25.775697 1212267 command_runner.go:130] > [crio.stats]
	I0731 23:16:25.775703 1212267 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 23:16:25.775708 1212267 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 23:16:25.775713 1212267 command_runner.go:130] > # stats_collection_period = 0
	I0731 23:16:25.775735 1212267 command_runner.go:130] ! time="2024-07-31 23:16:25.737095924Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0731 23:16:25.775749 1212267 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
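
The config dump above ends with metrics collection switched on (enable_metrics = true) while the port and socket keep their commented defaults. As a rough sketch only, assuming CRI-O really is serving Prometheus metrics on the default metrics_port = 9090 on localhost (the log does not confirm this endpoint is reachable), a probe could look like the following Go snippet:

// metrics_probe.go - minimal sketch: fetch CRI-O's Prometheus metrics,
// assuming enable_metrics = true and the default metrics_port = 9090.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// 127.0.0.1:9090 is an assumption taken from the defaults shown above.
	resp, err := client.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("fetched %d bytes of metrics, status %s\n", len(body), resp.Status)
}
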
	I0731 23:16:25.775873 1212267 cni.go:84] Creating CNI manager for ""
	I0731 23:16:25.775884 1212267 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 23:16:25.775893 1212267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
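
Before the kubeadm options below are assembled, the chosen pod CIDR (10.244.0.0/16) has to be a valid range that does not collide with the service CIDR (10.96.0.0/12). A minimal illustrative check in Go, not taken from minikube's own validation code:

// cidr_check.go - sketch: parse and compare the pod and service CIDRs
// that appear in the kubeadm options below. Illustrative only.
package main

import (
	"fmt"
	"net"
)

func main() {
	podCIDR, serviceCIDR := "10.244.0.0/16", "10.96.0.0/12"
	_, podNet, err := net.ParseCIDR(podCIDR)
	if err != nil {
		panic(err)
	}
	_, svcNet, err := net.ParseCIDR(serviceCIDR)
	if err != nil {
		panic(err)
	}
	// Overlap check: either network contains the other's base address.
	if podNet.Contains(svcNet.IP) || svcNet.Contains(podNet.IP) {
		fmt.Println("pod and service CIDRs overlap")
		return
	}
	fmt.Println("CIDRs parsed and do not overlap:", podNet, svcNet)
}
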
	I0731 23:16:25.775918 1212267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-615814 NodeName:multinode-615814 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 23:16:25.776050 1212267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-615814"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
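
The YAML above is rendered from the kubeadm options struct logged earlier. As an illustration of that templating step, here is a small Go text/template sketch that fills in AdvertiseAddress, APIServerPort and PodSubnet; the template text is a trimmed stand-in, not minikube's actual template:

// kubeadm_template.go - sketch: render a few of the option fields shown
// above into kubeadm-style YAML. Illustrative template, not minikube's own.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	opts := struct {
		AdvertiseAddress string
		APIServerPort    int
		PodSubnet        string
	}{"192.168.39.129", 8443, "10.244.0.0/16"}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
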
	
	I0731 23:16:25.776132 1212267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 23:16:25.786883 1212267 command_runner.go:130] > kubeadm
	I0731 23:16:25.786929 1212267 command_runner.go:130] > kubectl
	I0731 23:16:25.786936 1212267 command_runner.go:130] > kubelet
	I0731 23:16:25.786982 1212267 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:16:25.787054 1212267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 23:16:25.797190 1212267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0731 23:16:25.814789 1212267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:16:25.832074 1212267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0731 23:16:25.849760 1212267 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0731 23:16:25.854052 1212267 command_runner.go:130] > 192.168.39.129	control-plane.minikube.internal
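
The grep above simply confirms that /etc/hosts maps control-plane.minikube.internal to the node IP. A hypothetical Go helper doing the same check (not minikube source) could be:

// hosts_check.go - sketch of the /etc/hosts verification shown above:
// confirm an "IP  control-plane.minikube.internal" line is present.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func hasHostEntry(path, ip, host string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == ip {
			for _, name := range fields[1:] {
				if name == host {
					return true, nil
				}
			}
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasHostEntry("/etc/hosts", "192.168.39.129", "control-plane.minikube.internal")
	fmt.Println(ok, err)
}
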
	I0731 23:16:25.854160 1212267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:16:25.995016 1212267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:16:26.011457 1212267 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814 for IP: 192.168.39.129
	I0731 23:16:26.011491 1212267 certs.go:194] generating shared ca certs ...
	I0731 23:16:26.011517 1212267 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:16:26.011681 1212267 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 23:16:26.011725 1212267 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 23:16:26.011735 1212267 certs.go:256] generating profile certs ...
	I0731 23:16:26.011831 1212267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/client.key
	I0731 23:16:26.011892 1212267 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/apiserver.key.0892758f
	I0731 23:16:26.011925 1212267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/proxy-client.key
	I0731 23:16:26.011936 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 23:16:26.011948 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 23:16:26.011961 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 23:16:26.011976 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 23:16:26.011992 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 23:16:26.012006 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 23:16:26.012018 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 23:16:26.012031 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 23:16:26.012080 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 23:16:26.012138 1212267 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 23:16:26.012149 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 23:16:26.012171 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 23:16:26.012195 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 23:16:26.012219 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 23:16:26.012262 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:16:26.012290 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem -> /usr/share/ca-certificates/1179400.pem
	I0731 23:16:26.012306 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /usr/share/ca-certificates/11794002.pem
	I0731 23:16:26.012319 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:16:26.012930 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:16:26.038706 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 23:16:26.064227 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:16:26.089928 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 23:16:26.115013 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 23:16:26.140530 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 23:16:26.165527 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 23:16:26.191798 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 23:16:26.217284 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 23:16:26.242456 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 23:16:26.268015 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:16:26.293740 1212267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 23:16:26.311471 1212267 ssh_runner.go:195] Run: openssl version
	I0731 23:16:26.317834 1212267 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 23:16:26.317939 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 23:16:26.330111 1212267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 23:16:26.334943 1212267 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 23:16:26.334986 1212267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 23:16:26.335038 1212267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 23:16:26.341309 1212267 command_runner.go:130] > 3ec20f2e
	I0731 23:16:26.341423 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:16:26.351997 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:16:26.363680 1212267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:16:26.368783 1212267 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:16:26.368839 1212267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:16:26.368883 1212267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:16:26.375879 1212267 command_runner.go:130] > b5213941
	I0731 23:16:26.375979 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 23:16:26.387116 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 23:16:26.399039 1212267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 23:16:26.403860 1212267 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 23:16:26.403918 1212267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 23:16:26.403965 1212267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 23:16:26.410052 1212267 command_runner.go:130] > 51391683
	I0731 23:16:26.410171 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
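
The three blocks above each hash a CA with openssl x509 -hash -noout and then symlink it as /etc/ssl/certs/<hash>.0 so the system trust store picks it up. A sketch of that step in Go, shelling out to openssl for the subject hash (requires root and the paths shown in the log; the direct symlink target is a simplification):

// cert_hash_link.go - sketch mirroring the shell steps above: compute the
// OpenSSL subject hash of a CA file and symlink it as /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trust link:", link, "->", cert)
}
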
	I0731 23:16:26.420668 1212267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:16:26.425669 1212267 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:16:26.425698 1212267 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 23:16:26.425704 1212267 command_runner.go:130] > Device: 253,1	Inode: 7339051     Links: 1
	I0731 23:16:26.425710 1212267 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 23:16:26.425717 1212267 command_runner.go:130] > Access: 2024-07-31 23:09:25.050273240 +0000
	I0731 23:16:26.425722 1212267 command_runner.go:130] > Modify: 2024-07-31 23:09:25.050273240 +0000
	I0731 23:16:26.425726 1212267 command_runner.go:130] > Change: 2024-07-31 23:09:25.050273240 +0000
	I0731 23:16:26.425731 1212267 command_runner.go:130] >  Birth: 2024-07-31 23:09:25.050273240 +0000
	I0731 23:16:26.425786 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 23:16:26.431804 1212267 command_runner.go:130] > Certificate will not expire
	I0731 23:16:26.431909 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 23:16:26.437838 1212267 command_runner.go:130] > Certificate will not expire
	I0731 23:16:26.437945 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 23:16:26.443873 1212267 command_runner.go:130] > Certificate will not expire
	I0731 23:16:26.443952 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 23:16:26.449972 1212267 command_runner.go:130] > Certificate will not expire
	I0731 23:16:26.450062 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 23:16:26.456015 1212267 command_runner.go:130] > Certificate will not expire
	I0731 23:16:26.456131 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 23:16:26.462039 1212267 command_runner.go:130] > Certificate will not expire
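
Each openssl x509 -checkend 86400 call above asserts that the certificate stays valid for at least another 24 hours. The same check can be expressed in pure Go with crypto/x509; the path below is one of the certificates from the log and the snippet is illustrative only:

// cert_expiry.go - sketch of what "openssl x509 -checkend 86400" verifies:
// the certificate's NotAfter must be more than 24h away.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) > 86400*time.Second {
		fmt.Println("Certificate will not expire")
	} else {
		fmt.Println("Certificate expires within 24h")
	}
}
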
	I0731 23:16:26.462120 1212267 kubeadm.go:392] StartCluster: {Name:multinode-615814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-615814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:16:26.462268 1212267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 23:16:26.462336 1212267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 23:16:26.498343 1212267 command_runner.go:130] > 9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3
	I0731 23:16:26.498380 1212267 command_runner.go:130] > 1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7
	I0731 23:16:26.498390 1212267 command_runner.go:130] > 4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6
	I0731 23:16:26.498401 1212267 command_runner.go:130] > 3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638
	I0731 23:16:26.498409 1212267 command_runner.go:130] > d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8
	I0731 23:16:26.498417 1212267 command_runner.go:130] > 06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d
	I0731 23:16:26.498425 1212267 command_runner.go:130] > c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f
	I0731 23:16:26.498436 1212267 command_runner.go:130] > 2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625
	I0731 23:16:26.499982 1212267 cri.go:89] found id: "9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3"
	I0731 23:16:26.500004 1212267 cri.go:89] found id: "1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7"
	I0731 23:16:26.500010 1212267 cri.go:89] found id: "4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6"
	I0731 23:16:26.500015 1212267 cri.go:89] found id: "3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638"
	I0731 23:16:26.500019 1212267 cri.go:89] found id: "d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8"
	I0731 23:16:26.500024 1212267 cri.go:89] found id: "06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d"
	I0731 23:16:26.500028 1212267 cri.go:89] found id: "c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f"
	I0731 23:16:26.500032 1212267 cri.go:89] found id: "2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625"
	I0731 23:16:26.500035 1212267 cri.go:89] found id: ""
	I0731 23:16:26.500108 1212267 ssh_runner.go:195] Run: sudo runc list -f json
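
The crictl invocation above returns one container ID per line for everything labelled with the kube-system namespace, which cri.go then records as the "found id" entries. A hypothetical Go helper mirroring that call (requires crictl and root; not minikube source):

// cri_list.go - sketch of the container discovery shown above: run crictl
// with the kube-system namespace label filter and collect the container IDs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	out, err := cmd.Output()
	if err != nil {
		panic(err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
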
	
	
	==> CRI-O <==
	Jul 31 23:18:08 multinode-615814 crio[2851]: time="2024-07-31 23:18:08.953965215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722467888953941343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8fae49d-03f3-42c3-a8f7-ab06d97c5633 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:18:08 multinode-615814 crio[2851]: time="2024-07-31 23:18:08.954617524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=414f2e03-b273-4751-889b-03189d9daa6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:18:08 multinode-615814 crio[2851]: time="2024-07-31 23:18:08.954671533Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=414f2e03-b273-4751-889b-03189d9daa6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:18:08 multinode-615814 crio[2851]: time="2024-07-31 23:18:08.955059884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634238cea87dfd522a4afbc1b6f7c2e0723302042db2ef158be59eabb50aaf4b,PodSandboxId:081e7c7fabc314ca240a9df7a55f6f7d16f644b11d46a4d39f59adc0ad6415a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722467826440622614,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb39a1b6f6d9a920e7e3aef3cf1bc5b52f601b7e7c300509ba20765f3992a48,PodSandboxId:407378a1ef10a224c1d92e563c10d196b7bcddcd61e2e55f12dc5eec92a118a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722467792967903064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62a1c59ee05950bd51a41ad4264af099a21feea6709f28d502bb7b5d635ad18,PodSandboxId:c70b70cbb244ad0677eb11a6cd4ed6c5966736afe5a4acdd3ad819ee7cd731d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722467792989925682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b413f3a1447eefd064b9c4597c8b80fa2ec8449862378299d92754979475ae,PodSandboxId:9bf2a3208a6bf73094cb7026cc005746dcea46f08af5d9f2d0257b01a7019228,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722467792803791857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]
string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2442166c3707f6b5f4221023f6212378ba962adfaadd00e33dae9b1294ccbad,PodSandboxId:2735a821fa6f42c302df19f4313e103fd88c6438e8fe66b014fd59a6e3953131,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722467792722594977,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813a6c631fb5df4bac13ef67e733092fb060a5cde1be7b2f853af7c2e9fba44c,PodSandboxId:fbb8cea1e7e757417764e23d55d44e5400fa7a58bba63b59bf36a6aa997a64fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722467788913145592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:287f021b9eafae0b74b6723e97f61b50dee520e3c86c639294474bec248ee983,PodSandboxId:87e81de4f3188edd03f3a7e388af2142b483a6fd3b2655133e3c7635648fc680,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722467788872897441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,},Annotations:map[string]string{io.kubernetes.container.hash: e37353
df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67726bf0c8778c7d1626625b33292a04ba1b7870a56e711e1b0896d186e13542,PodSandboxId:cbf1afd582579aee58284f85977119621137c8c06101fe81032203ff4cb71325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722467788815988610,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600c3b77043c1f6e266fb6bfb7f11a7bbd458517ba9441be75b2cf41373e8d45,PodSandboxId:7889254fb9ddda1b07574262efe16b1ff037335e6fe4e994bdcdab3bef673e2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722467788799317842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0198443caaecdd03f1e10333dfbbee59233bcc806396e24cc89729cb7447b2b,PodSandboxId:dd0e7cd817ff0030b99e69ce0bb7d14eb4cc29cba9f4d7462e9af910c2c73902,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722467459737893713,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3,PodSandboxId:4db4d8ca82c04b4f264ea7cd645dbf80edc596faa387b73eda0b8bcc2bb1de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722467403929448206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7,PodSandboxId:6de822223e246b55811420f29f3cb1b5f11c0a0d0e15643230941cba2aeb75d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722467403848800962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.kubernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6,PodSandboxId:2c2f786f39b7bf49e72f083e5c0d6f9a1bd07174010f4936559ee6571baf04ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722467391805079728,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638,PodSandboxId:87fd03202426c7121a1c4267ab1fdd7459f1d3d6464f39a22e8aee03791d5a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722467389877001759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8,PodSandboxId:41204e620933f09192491f3a87633339ff053f82a41ccb46ab58e7062abd453a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722467369231417507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d,PodSandboxId:028928f1ce584786101582aeeccc466c906a9734020620361c08425eaa0310fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722467369198204764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625,PodSandboxId:c4e3ca8ab9201c594a3c595b574e3d9d2af547452a6473fb9b2d9707f9ecc88d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722467369153392589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,
},Annotations:map[string]string{io.kubernetes.container.hash: e37353df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f,PodSandboxId:c02cee6317a3d5c921a5c186764f3bbd483d401cbd369f7a6d83ff2138fe6eda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722467369174677751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=414f2e03-b273-4751-889b-03189d9daa6b name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:18:08 multinode-615814 crio[2851]: time="2024-07-31 23:18:08.997825801Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9cd2a2c-0e85-4b6d-98cf-83b9e5cdc8fd name=/runtime.v1.RuntimeService/Version
	Jul 31 23:18:08 multinode-615814 crio[2851]: time="2024-07-31 23:18:08.997917971Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9cd2a2c-0e85-4b6d-98cf-83b9e5cdc8fd name=/runtime.v1.RuntimeService/Version
	Jul 31 23:18:08 multinode-615814 crio[2851]: time="2024-07-31 23:18:08.999085745Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9fec076-c4ee-48f9-8d43-647de22d144c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:18:08 multinode-615814 crio[2851]: time="2024-07-31 23:18:08.999793331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722467888999739632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9fec076-c4ee-48f9-8d43-647de22d144c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.000602883Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51016b7a-2614-41fd-aae2-0f3a48522772 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.000703749Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51016b7a-2614-41fd-aae2-0f3a48522772 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.001190531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634238cea87dfd522a4afbc1b6f7c2e0723302042db2ef158be59eabb50aaf4b,PodSandboxId:081e7c7fabc314ca240a9df7a55f6f7d16f644b11d46a4d39f59adc0ad6415a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722467826440622614,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb39a1b6f6d9a920e7e3aef3cf1bc5b52f601b7e7c300509ba20765f3992a48,PodSandboxId:407378a1ef10a224c1d92e563c10d196b7bcddcd61e2e55f12dc5eec92a118a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722467792967903064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62a1c59ee05950bd51a41ad4264af099a21feea6709f28d502bb7b5d635ad18,PodSandboxId:c70b70cbb244ad0677eb11a6cd4ed6c5966736afe5a4acdd3ad819ee7cd731d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722467792989925682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b413f3a1447eefd064b9c4597c8b80fa2ec8449862378299d92754979475ae,PodSandboxId:9bf2a3208a6bf73094cb7026cc005746dcea46f08af5d9f2d0257b01a7019228,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722467792803791857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]
string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2442166c3707f6b5f4221023f6212378ba962adfaadd00e33dae9b1294ccbad,PodSandboxId:2735a821fa6f42c302df19f4313e103fd88c6438e8fe66b014fd59a6e3953131,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722467792722594977,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813a6c631fb5df4bac13ef67e733092fb060a5cde1be7b2f853af7c2e9fba44c,PodSandboxId:fbb8cea1e7e757417764e23d55d44e5400fa7a58bba63b59bf36a6aa997a64fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722467788913145592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:287f021b9eafae0b74b6723e97f61b50dee520e3c86c639294474bec248ee983,PodSandboxId:87e81de4f3188edd03f3a7e388af2142b483a6fd3b2655133e3c7635648fc680,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722467788872897441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,},Annotations:map[string]string{io.kubernetes.container.hash: e37353
df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67726bf0c8778c7d1626625b33292a04ba1b7870a56e711e1b0896d186e13542,PodSandboxId:cbf1afd582579aee58284f85977119621137c8c06101fe81032203ff4cb71325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722467788815988610,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600c3b77043c1f6e266fb6bfb7f11a7bbd458517ba9441be75b2cf41373e8d45,PodSandboxId:7889254fb9ddda1b07574262efe16b1ff037335e6fe4e994bdcdab3bef673e2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722467788799317842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0198443caaecdd03f1e10333dfbbee59233bcc806396e24cc89729cb7447b2b,PodSandboxId:dd0e7cd817ff0030b99e69ce0bb7d14eb4cc29cba9f4d7462e9af910c2c73902,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722467459737893713,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3,PodSandboxId:4db4d8ca82c04b4f264ea7cd645dbf80edc596faa387b73eda0b8bcc2bb1de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722467403929448206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7,PodSandboxId:6de822223e246b55811420f29f3cb1b5f11c0a0d0e15643230941cba2aeb75d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722467403848800962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.kubernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6,PodSandboxId:2c2f786f39b7bf49e72f083e5c0d6f9a1bd07174010f4936559ee6571baf04ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722467391805079728,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638,PodSandboxId:87fd03202426c7121a1c4267ab1fdd7459f1d3d6464f39a22e8aee03791d5a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722467389877001759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8,PodSandboxId:41204e620933f09192491f3a87633339ff053f82a41ccb46ab58e7062abd453a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722467369231417507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d,PodSandboxId:028928f1ce584786101582aeeccc466c906a9734020620361c08425eaa0310fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722467369198204764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625,PodSandboxId:c4e3ca8ab9201c594a3c595b574e3d9d2af547452a6473fb9b2d9707f9ecc88d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722467369153392589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,
},Annotations:map[string]string{io.kubernetes.container.hash: e37353df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f,PodSandboxId:c02cee6317a3d5c921a5c186764f3bbd483d401cbd369f7a6d83ff2138fe6eda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722467369174677751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51016b7a-2614-41fd-aae2-0f3a48522772 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.047213700Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=10957989-753f-4d95-bebd-4baa91ca58be name=/runtime.v1.RuntimeService/Version
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.047363512Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10957989-753f-4d95-bebd-4baa91ca58be name=/runtime.v1.RuntimeService/Version
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.048699947Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6ea5030-80bb-43c6-b6b8-0b11970fc8e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.049144264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722467889049120942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6ea5030-80bb-43c6-b6b8-0b11970fc8e3 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.049662910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0dafc0df-454d-4da0-af58-09d3916a9ffe name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.049734396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0dafc0df-454d-4da0-af58-09d3916a9ffe name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.050064729Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634238cea87dfd522a4afbc1b6f7c2e0723302042db2ef158be59eabb50aaf4b,PodSandboxId:081e7c7fabc314ca240a9df7a55f6f7d16f644b11d46a4d39f59adc0ad6415a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722467826440622614,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb39a1b6f6d9a920e7e3aef3cf1bc5b52f601b7e7c300509ba20765f3992a48,PodSandboxId:407378a1ef10a224c1d92e563c10d196b7bcddcd61e2e55f12dc5eec92a118a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722467792967903064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62a1c59ee05950bd51a41ad4264af099a21feea6709f28d502bb7b5d635ad18,PodSandboxId:c70b70cbb244ad0677eb11a6cd4ed6c5966736afe5a4acdd3ad819ee7cd731d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722467792989925682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b413f3a1447eefd064b9c4597c8b80fa2ec8449862378299d92754979475ae,PodSandboxId:9bf2a3208a6bf73094cb7026cc005746dcea46f08af5d9f2d0257b01a7019228,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722467792803791857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]
string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2442166c3707f6b5f4221023f6212378ba962adfaadd00e33dae9b1294ccbad,PodSandboxId:2735a821fa6f42c302df19f4313e103fd88c6438e8fe66b014fd59a6e3953131,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722467792722594977,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813a6c631fb5df4bac13ef67e733092fb060a5cde1be7b2f853af7c2e9fba44c,PodSandboxId:fbb8cea1e7e757417764e23d55d44e5400fa7a58bba63b59bf36a6aa997a64fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722467788913145592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:287f021b9eafae0b74b6723e97f61b50dee520e3c86c639294474bec248ee983,PodSandboxId:87e81de4f3188edd03f3a7e388af2142b483a6fd3b2655133e3c7635648fc680,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722467788872897441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,},Annotations:map[string]string{io.kubernetes.container.hash: e37353
df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67726bf0c8778c7d1626625b33292a04ba1b7870a56e711e1b0896d186e13542,PodSandboxId:cbf1afd582579aee58284f85977119621137c8c06101fe81032203ff4cb71325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722467788815988610,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600c3b77043c1f6e266fb6bfb7f11a7bbd458517ba9441be75b2cf41373e8d45,PodSandboxId:7889254fb9ddda1b07574262efe16b1ff037335e6fe4e994bdcdab3bef673e2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722467788799317842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0198443caaecdd03f1e10333dfbbee59233bcc806396e24cc89729cb7447b2b,PodSandboxId:dd0e7cd817ff0030b99e69ce0bb7d14eb4cc29cba9f4d7462e9af910c2c73902,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722467459737893713,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3,PodSandboxId:4db4d8ca82c04b4f264ea7cd645dbf80edc596faa387b73eda0b8bcc2bb1de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722467403929448206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7,PodSandboxId:6de822223e246b55811420f29f3cb1b5f11c0a0d0e15643230941cba2aeb75d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722467403848800962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.kubernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6,PodSandboxId:2c2f786f39b7bf49e72f083e5c0d6f9a1bd07174010f4936559ee6571baf04ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722467391805079728,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638,PodSandboxId:87fd03202426c7121a1c4267ab1fdd7459f1d3d6464f39a22e8aee03791d5a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722467389877001759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8,PodSandboxId:41204e620933f09192491f3a87633339ff053f82a41ccb46ab58e7062abd453a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722467369231417507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d,PodSandboxId:028928f1ce584786101582aeeccc466c906a9734020620361c08425eaa0310fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722467369198204764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625,PodSandboxId:c4e3ca8ab9201c594a3c595b574e3d9d2af547452a6473fb9b2d9707f9ecc88d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722467369153392589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,
},Annotations:map[string]string{io.kubernetes.container.hash: e37353df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f,PodSandboxId:c02cee6317a3d5c921a5c186764f3bbd483d401cbd369f7a6d83ff2138fe6eda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722467369174677751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0dafc0df-454d-4da0-af58-09d3916a9ffe name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.091698293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a90d755b-db89-4845-a21f-bd76fc3fb9f8 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.091780914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a90d755b-db89-4845-a21f-bd76fc3fb9f8 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.092916860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0fc5357b-0855-4512-a805-472749419cb7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.093406376Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722467889093378461,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fc5357b-0855-4512-a805-472749419cb7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.093867685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1d1fea2-a16e-42ef-8b52-ded099f3c299 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.093954291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1d1fea2-a16e-42ef-8b52-ded099f3c299 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:18:09 multinode-615814 crio[2851]: time="2024-07-31 23:18:09.094355747Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634238cea87dfd522a4afbc1b6f7c2e0723302042db2ef158be59eabb50aaf4b,PodSandboxId:081e7c7fabc314ca240a9df7a55f6f7d16f644b11d46a4d39f59adc0ad6415a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722467826440622614,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb39a1b6f6d9a920e7e3aef3cf1bc5b52f601b7e7c300509ba20765f3992a48,PodSandboxId:407378a1ef10a224c1d92e563c10d196b7bcddcd61e2e55f12dc5eec92a118a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722467792967903064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62a1c59ee05950bd51a41ad4264af099a21feea6709f28d502bb7b5d635ad18,PodSandboxId:c70b70cbb244ad0677eb11a6cd4ed6c5966736afe5a4acdd3ad819ee7cd731d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722467792989925682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b413f3a1447eefd064b9c4597c8b80fa2ec8449862378299d92754979475ae,PodSandboxId:9bf2a3208a6bf73094cb7026cc005746dcea46f08af5d9f2d0257b01a7019228,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722467792803791857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]
string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2442166c3707f6b5f4221023f6212378ba962adfaadd00e33dae9b1294ccbad,PodSandboxId:2735a821fa6f42c302df19f4313e103fd88c6438e8fe66b014fd59a6e3953131,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722467792722594977,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813a6c631fb5df4bac13ef67e733092fb060a5cde1be7b2f853af7c2e9fba44c,PodSandboxId:fbb8cea1e7e757417764e23d55d44e5400fa7a58bba63b59bf36a6aa997a64fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722467788913145592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:287f021b9eafae0b74b6723e97f61b50dee520e3c86c639294474bec248ee983,PodSandboxId:87e81de4f3188edd03f3a7e388af2142b483a6fd3b2655133e3c7635648fc680,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722467788872897441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,},Annotations:map[string]string{io.kubernetes.container.hash: e37353
df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67726bf0c8778c7d1626625b33292a04ba1b7870a56e711e1b0896d186e13542,PodSandboxId:cbf1afd582579aee58284f85977119621137c8c06101fe81032203ff4cb71325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722467788815988610,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600c3b77043c1f6e266fb6bfb7f11a7bbd458517ba9441be75b2cf41373e8d45,PodSandboxId:7889254fb9ddda1b07574262efe16b1ff037335e6fe4e994bdcdab3bef673e2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722467788799317842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0198443caaecdd03f1e10333dfbbee59233bcc806396e24cc89729cb7447b2b,PodSandboxId:dd0e7cd817ff0030b99e69ce0bb7d14eb4cc29cba9f4d7462e9af910c2c73902,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722467459737893713,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3,PodSandboxId:4db4d8ca82c04b4f264ea7cd645dbf80edc596faa387b73eda0b8bcc2bb1de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722467403929448206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7,PodSandboxId:6de822223e246b55811420f29f3cb1b5f11c0a0d0e15643230941cba2aeb75d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722467403848800962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.kubernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6,PodSandboxId:2c2f786f39b7bf49e72f083e5c0d6f9a1bd07174010f4936559ee6571baf04ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722467391805079728,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638,PodSandboxId:87fd03202426c7121a1c4267ab1fdd7459f1d3d6464f39a22e8aee03791d5a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722467389877001759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8,PodSandboxId:41204e620933f09192491f3a87633339ff053f82a41ccb46ab58e7062abd453a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722467369231417507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d,PodSandboxId:028928f1ce584786101582aeeccc466c906a9734020620361c08425eaa0310fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722467369198204764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625,PodSandboxId:c4e3ca8ab9201c594a3c595b574e3d9d2af547452a6473fb9b2d9707f9ecc88d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722467369153392589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,
},Annotations:map[string]string{io.kubernetes.container.hash: e37353df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f,PodSandboxId:c02cee6317a3d5c921a5c186764f3bbd483d401cbd369f7a6d83ff2138fe6eda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722467369174677751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1d1fea2-a16e-42ef-8b52-ded099f3c299 name=/runtime.v1.RuntimeService/ListContainers
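	The entries above show CRI-O answering /runtime.v1.RuntimeService/ListContainers (along with Version and ImageFsInfo) calls with the full container list. When triaging a report like this, the same query can be reproduced directly against the node's CRI socket. The following is a minimal sketch, not part of the minikube test harness: it assumes the CRI-O socket path /var/run/crio/crio.sock (the usual path inside the minikube VM) and the generated client in k8s.io/cri-api/pkg/apis/runtime/v1; neither detail is taken from this report.
	
	// listcontainers.go - minimal sketch, not part of the minikube test suite.
	// Assumptions: CRI-O listening on /var/run/crio/crio.sock, and the modules
	// google.golang.org/grpc and k8s.io/cri-api present in go.mod.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial the CRI-O unix socket (adjust the path for your node).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// An empty filter corresponds to "No filters were applied, returning
		// full container list" in the crio debug log above.
		resp, err := runtimev1.NewRuntimeServiceClient(conn).ListContainers(ctx,
			&runtimev1.ListContainersRequest{Filter: &runtimev1.ContainerFilter{}})
		if err != nil {
			panic(err)
		}
	
		// Print roughly the same columns as the "container status" table below.
		for _, c := range resp.Containers {
			fmt.Printf("%-13.13s  %-25s  %-17s  attempt=%d  created=%s\n",
				c.Id, c.Metadata.Name, c.State, c.Metadata.Attempt,
				time.Unix(0, c.CreatedAt).Format(time.RFC3339))
		}
	}
	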
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	634238cea87df       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   081e7c7fabc31       busybox-fc5497c4f-csqxw
	b62a1c59ee059       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   c70b70cbb244a       coredns-7db6d8ff4d-qnjmk
	ceb39a1b6f6d9       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   407378a1ef10a       kindnet-hmtpd
	81b413f3a1447       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   9bf2a3208a6bf       kube-proxy-kgb6k
	d2442166c3707       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   2735a821fa6f4       storage-provisioner
	813a6c631fb5d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   fbb8cea1e7e75       kube-controller-manager-multinode-615814
	287f021b9eafa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   87e81de4f3188       etcd-multinode-615814
	67726bf0c8778       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   cbf1afd582579       kube-apiserver-multinode-615814
	600c3b77043c1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   7889254fb9ddd       kube-scheduler-multinode-615814
	b0198443caaec       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   dd0e7cd817ff0       busybox-fc5497c4f-csqxw
	9416bbb6bdebf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   4db4d8ca82c04       coredns-7db6d8ff4d-qnjmk
	1f0ff197fe4e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   6de822223e246       storage-provisioner
	4dbba9426fe30       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   2c2f786f39b7b       kindnet-hmtpd
	3b0d3881582de       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   87fd03202426c       kube-proxy-kgb6k
	d9f554e5a9b24       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   41204e620933f       kube-scheduler-multinode-615814
	06d82efdc6cac       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   028928f1ce584       kube-controller-manager-multinode-615814
	c72466f6d47ef       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   c02cee6317a3d       kube-apiserver-multinode-615814
	2b8097a110225       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   c4e3ca8ab9201       etcd-multinode-615814
	
	
	==> coredns [9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3] <==
	[INFO] 10.244.1.2:37763 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001977813s
	[INFO] 10.244.1.2:41883 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126602s
	[INFO] 10.244.1.2:42644 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077367s
	[INFO] 10.244.1.2:41581 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001418364s
	[INFO] 10.244.1.2:53576 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091426s
	[INFO] 10.244.1.2:38644 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173573s
	[INFO] 10.244.1.2:34235 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091008s
	[INFO] 10.244.0.3:59285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166399s
	[INFO] 10.244.0.3:53189 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060775s
	[INFO] 10.244.0.3:58617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050319s
	[INFO] 10.244.0.3:56987 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102267s
	[INFO] 10.244.1.2:42379 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018641s
	[INFO] 10.244.1.2:45222 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098353s
	[INFO] 10.244.1.2:34766 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159215s
	[INFO] 10.244.1.2:36921 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009059s
	[INFO] 10.244.0.3:39447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167401s
	[INFO] 10.244.0.3:46406 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130046s
	[INFO] 10.244.0.3:58958 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123333s
	[INFO] 10.244.0.3:47234 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100538s
	[INFO] 10.244.1.2:52581 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000208333s
	[INFO] 10.244.1.2:46479 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000098191s
	[INFO] 10.244.1.2:34113 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126232s
	[INFO] 10.244.1.2:48953 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008213s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b62a1c59ee05950bd51a41ad4264af099a21feea6709f28d502bb7b5d635ad18] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50899 - 41257 "HINFO IN 920521463057196509.9166619468331811298. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.066323696s
	
	
	==> describe nodes <==
	Name:               multinode-615814
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-615814
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=multinode-615814
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T23_09_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:09:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-615814
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:18:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:16:31 +0000   Wed, 31 Jul 2024 23:09:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:16:31 +0000   Wed, 31 Jul 2024 23:09:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:16:31 +0000   Wed, 31 Jul 2024 23:09:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:16:31 +0000   Wed, 31 Jul 2024 23:10:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    multinode-615814
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac524db6e09f4202881a55f034e78507
	  System UUID:                ac524db6-e09f-4202-881a-55f034e78507
	  Boot ID:                    fc4f4b6e-22ae-48c7-9dc9-7666b57c3854
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-csqxw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 coredns-7db6d8ff4d-qnjmk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m21s
	  kube-system                 etcd-multinode-615814                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m36s
	  kube-system                 kindnet-hmtpd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m22s
	  kube-system                 kube-apiserver-multinode-615814             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-controller-manager-multinode-615814    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-proxy-kgb6k                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-multinode-615814             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m36s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m19s                  kube-proxy       
	  Normal  Starting                 96s                    kube-proxy       
	  Normal  Starting                 8m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m41s (x6 over 8m41s)  kubelet          Node multinode-615814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m41s (x6 over 8m41s)  kubelet          Node multinode-615814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m41s (x5 over 8m41s)  kubelet          Node multinode-615814 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m35s                  kubelet          Node multinode-615814 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m35s                  kubelet          Node multinode-615814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m35s                  kubelet          Node multinode-615814 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m35s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m22s                  node-controller  Node multinode-615814 event: Registered Node multinode-615814 in Controller
	  Normal  NodeReady                8m6s                   kubelet          Node multinode-615814 status is now: NodeReady
	  Normal  Starting                 101s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s (x8 over 101s)    kubelet          Node multinode-615814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x8 over 101s)    kubelet          Node multinode-615814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x7 over 101s)    kubelet          Node multinode-615814 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                    node-controller  Node multinode-615814 event: Registered Node multinode-615814 in Controller
	
	
	Name:               multinode-615814-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-615814-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=multinode-615814
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T23_17_08_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:17:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-615814-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:17:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:17:39 +0000   Wed, 31 Jul 2024 23:17:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:17:39 +0000   Wed, 31 Jul 2024 23:17:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:17:39 +0000   Wed, 31 Jul 2024 23:17:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:17:39 +0000   Wed, 31 Jul 2024 23:17:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    multinode-615814-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef74021fba6b4e7588591b3dc5e480db
	  System UUID:                ef74021f-ba6b-4e75-8859-1b3dc5e480db
	  Boot ID:                    76250e4c-9596-4486-8cbe-9b2c54afb1f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-zxdtw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-flflz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m33s
	  kube-system                 kube-proxy-swdtj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m29s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m34s (x2 over 7m34s)  kubelet     Node multinode-615814-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s (x2 over 7m34s)  kubelet     Node multinode-615814-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m34s (x2 over 7m34s)  kubelet     Node multinode-615814-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m33s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m14s                  kubelet     Node multinode-615814-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)      kubelet     Node multinode-615814-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)      kubelet     Node multinode-615814-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)      kubelet     Node multinode-615814-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  61s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-615814-m02 status is now: NodeReady
	
	
	Name:               multinode-615814-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-615814-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=multinode-615814
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T23_17_47_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:17:47 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-615814-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:18:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:18:06 +0000   Wed, 31 Jul 2024 23:17:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:18:06 +0000   Wed, 31 Jul 2024 23:17:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:18:06 +0000   Wed, 31 Jul 2024 23:17:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:18:06 +0000   Wed, 31 Jul 2024 23:18:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.72
	  Hostname:    multinode-615814-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1a28375d32e84c17864171505fec5e2a
	  System UUID:                1a28375d-32e8-4c17-8641-71505fec5e2a
	  Boot ID:                    74703c6d-b1a7-4304-9d62-b80e7e0c7d7d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-l8qmm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m38s
	  kube-system                 kube-proxy-h6lcx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m32s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m42s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m39s (x2 over 6m39s)  kubelet     Node multinode-615814-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s (x2 over 6m39s)  kubelet     Node multinode-615814-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s (x2 over 6m39s)  kubelet     Node multinode-615814-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m18s                  kubelet     Node multinode-615814-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m48s (x2 over 5m48s)  kubelet     Node multinode-615814-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m48s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m48s (x2 over 5m48s)  kubelet     Node multinode-615814-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m48s (x2 over 5m48s)  kubelet     Node multinode-615814-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m29s                  kubelet     Node multinode-615814-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet     Node multinode-615814-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet     Node multinode-615814-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet     Node multinode-615814-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-615814-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.169142] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.160559] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.292502] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.387556] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +0.057539] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.921147] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.075003] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.519215] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.092929] kauditd_printk_skb: 43 callbacks suppressed
	[ +13.510325] systemd-fstab-generator[1460]: Ignoring "noauto" option for root device
	[  +0.139899] kauditd_printk_skb: 21 callbacks suppressed
	[Jul31 23:10] kauditd_printk_skb: 60 callbacks suppressed
	[ +54.439278] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 23:16] systemd-fstab-generator[2771]: Ignoring "noauto" option for root device
	[  +0.173017] systemd-fstab-generator[2783]: Ignoring "noauto" option for root device
	[  +0.197267] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.151393] systemd-fstab-generator[2809]: Ignoring "noauto" option for root device
	[  +0.307205] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +7.167374] systemd-fstab-generator[2934]: Ignoring "noauto" option for root device
	[  +0.080471] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.013474] systemd-fstab-generator[3059]: Ignoring "noauto" option for root device
	[  +4.673070] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.356148] systemd-fstab-generator[3881]: Ignoring "noauto" option for root device
	[  +0.089932] kauditd_printk_skb: 32 callbacks suppressed
	[Jul31 23:17] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [287f021b9eafae0b74b6723e97f61b50dee520e3c86c639294474bec248ee983] <==
	{"level":"info","ts":"2024-07-31T23:16:29.271591Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:16:29.273717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:16:29.29166Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T23:16:29.302547Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-07-31T23:16:29.30343Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-07-31T23:16:29.315431Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"245a8df1c58de0e1","initial-advertise-peer-urls":["https://192.168.39.129:2380"],"listen-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T23:16:29.315501Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T23:16:30.418104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T23:16:30.418168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T23:16:30.418208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgPreVoteResp from 245a8df1c58de0e1 at term 2"}
	{"level":"info","ts":"2024-07-31T23:16:30.418222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T23:16:30.418228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgVoteResp from 245a8df1c58de0e1 at term 3"}
	{"level":"info","ts":"2024-07-31T23:16:30.418236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T23:16:30.418253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 245a8df1c58de0e1 elected leader 245a8df1c58de0e1 at term 3"}
	{"level":"info","ts":"2024-07-31T23:16:30.42551Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"245a8df1c58de0e1","local-member-attributes":"{Name:multinode-615814 ClientURLs:[https://192.168.39.129:2379]}","request-path":"/0/members/245a8df1c58de0e1/attributes","cluster-id":"a2af9788ad7a361f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T23:16:30.425571Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:16:30.425851Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:16:30.426164Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T23:16:30.426206Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T23:16:30.42746Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T23:16:30.427911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.129:2379"}
	{"level":"info","ts":"2024-07-31T23:17:12.611533Z","caller":"traceutil/trace.go:171","msg":"trace[394771171] linearizableReadLoop","detail":"{readStateIndex:1183; appliedIndex:1182; }","duration":"127.528551ms","start":"2024-07-31T23:17:12.483982Z","end":"2024-07-31T23:17:12.611511Z","steps":["trace[394771171] 'read index received'  (duration: 127.26503ms)","trace[394771171] 'applied index is now lower than readState.Index'  (duration: 262.65µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T23:17:12.611761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.746049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-615814-m02\" ","response":"range_response_count:1 size:3117"}
	{"level":"info","ts":"2024-07-31T23:17:12.611839Z","caller":"traceutil/trace.go:171","msg":"trace[156676855] range","detail":"{range_begin:/registry/minions/multinode-615814-m02; range_end:; response_count:1; response_revision:1065; }","duration":"127.903914ms","start":"2024-07-31T23:17:12.483924Z","end":"2024-07-31T23:17:12.611828Z","steps":["trace[156676855] 'agreement among raft nodes before linearized reading'  (duration: 127.694397ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:17:12.613764Z","caller":"traceutil/trace.go:171","msg":"trace[593072095] transaction","detail":"{read_only:false; response_revision:1065; number_of_response:1; }","duration":"142.945583ms","start":"2024-07-31T23:17:12.47079Z","end":"2024-07-31T23:17:12.613736Z","steps":["trace[593072095] 'process raft request'  (duration: 140.55982ms)"],"step_count":1}
	
	
	==> etcd [2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625] <==
	{"level":"info","ts":"2024-07-31T23:09:30.450376Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:09:30.454326Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T23:09:30.45437Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T23:09:30.461106Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.129:2379"}
	{"level":"info","ts":"2024-07-31T23:09:30.464014Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T23:10:36.097887Z","caller":"traceutil/trace.go:171","msg":"trace[661407891] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:497; }","duration":"152.542786ms","start":"2024-07-31T23:10:35.94533Z","end":"2024-07-31T23:10:36.097873Z","steps":["trace[661407891] 'read index received'  (duration: 147.460813ms)","trace[661407891] 'applied index is now lower than readState.Index'  (duration: 5.081525ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T23:10:36.098671Z","caller":"traceutil/trace.go:171","msg":"trace[46459963] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"219.287745ms","start":"2024-07-31T23:10:35.879366Z","end":"2024-07-31T23:10:36.098654Z","steps":["trace[46459963] 'process raft request'  (duration: 213.242999ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:10:36.099072Z","caller":"traceutil/trace.go:171","msg":"trace[366714090] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"173.643089ms","start":"2024-07-31T23:10:35.925419Z","end":"2024-07-31T23:10:36.099062Z","steps":["trace[366714090] 'process raft request'  (duration: 172.375954ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T23:10:36.099175Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.842759ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-615814-m02\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-07-31T23:10:36.099229Z","caller":"traceutil/trace.go:171","msg":"trace[1850576568] range","detail":"{range_begin:/registry/minions/multinode-615814-m02; range_end:; response_count:1; response_revision:474; }","duration":"153.930297ms","start":"2024-07-31T23:10:35.94529Z","end":"2024-07-31T23:10:36.099221Z","steps":["trace[1850576568] 'agreement among raft nodes before linearized reading'  (duration: 153.847328ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:10:39.302709Z","caller":"traceutil/trace.go:171","msg":"trace[1596659419] linearizableReadLoop","detail":"{readStateIndex:538; appliedIndex:537; }","duration":"111.478194ms","start":"2024-07-31T23:10:39.191209Z","end":"2024-07-31T23:10:39.302687Z","steps":["trace[1596659419] 'read index received'  (duration: 30.9481ms)","trace[1596659419] 'applied index is now lower than readState.Index'  (duration: 80.528957ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T23:10:39.302864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.633334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-615814-m02\" ","response":"range_response_count:1 size:3228"}
	{"level":"info","ts":"2024-07-31T23:10:39.302901Z","caller":"traceutil/trace.go:171","msg":"trace[714340579] range","detail":"{range_begin:/registry/minions/multinode-615814-m02; range_end:; response_count:1; response_revision:506; }","duration":"111.711513ms","start":"2024-07-31T23:10:39.191178Z","end":"2024-07-31T23:10:39.30289Z","steps":["trace[714340579] 'agreement among raft nodes before linearized reading'  (duration: 111.60822ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:11:31.003733Z","caller":"traceutil/trace.go:171","msg":"trace[795890352] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"229.11638ms","start":"2024-07-31T23:11:30.774595Z","end":"2024-07-31T23:11:31.003711Z","steps":["trace[795890352] 'process raft request'  (duration: 216.779027ms)","trace[795890352] 'compare'  (duration: 11.965369ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T23:11:31.004013Z","caller":"traceutil/trace.go:171","msg":"trace[188594877] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"160.762895ms","start":"2024-07-31T23:11:30.843238Z","end":"2024-07-31T23:11:31.004001Z","steps":["trace[188594877] 'process raft request'  (duration: 160.247693ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:14:46.661101Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T23:14:46.661151Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-615814","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"]}
	{"level":"warn","ts":"2024-07-31T23:14:46.662632Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T23:14:46.664863Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T23:14:46.699233Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.129:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T23:14:46.699439Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.129:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T23:14:46.699581Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"245a8df1c58de0e1","current-leader-member-id":"245a8df1c58de0e1"}
	{"level":"info","ts":"2024-07-31T23:14:46.702793Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-07-31T23:14:46.70312Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-07-31T23:14:46.703167Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-615814","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"]}
	
	
	==> kernel <==
	 23:18:09 up 9 min,  0 users,  load average: 0.28, 0.18, 0.09
	Linux multinode-615814 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6] <==
	I0731 23:14:02.820888       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	I0731 23:14:12.814213       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:14:12.814298       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	I0731 23:14:12.814431       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:14:12.814454       1 main.go:299] handling current node
	I0731 23:14:12.814466       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:14:12.814471       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:14:22.821376       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:14:22.821482       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:14:22.821630       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:14:22.821653       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	I0731 23:14:22.821709       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:14:22.821731       1 main.go:299] handling current node
	I0731 23:14:32.821441       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:14:32.821549       1 main.go:299] handling current node
	I0731 23:14:32.821577       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:14:32.821595       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:14:32.821745       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:14:32.821783       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	I0731 23:14:42.821452       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:14:42.821498       1 main.go:299] handling current node
	I0731 23:14:42.821513       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:14:42.821519       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:14:42.821643       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:14:42.821665       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ceb39a1b6f6d9a920e7e3aef3cf1bc5b52f601b7e7c300509ba20765f3992a48] <==
	I0731 23:17:23.833928       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	I0731 23:17:33.833027       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:17:33.833134       1 main.go:299] handling current node
	I0731 23:17:33.833162       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:17:33.833180       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:17:33.833394       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:17:33.833428       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	I0731 23:17:43.833448       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:17:43.833588       1 main.go:299] handling current node
	I0731 23:17:43.833622       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:17:43.833641       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:17:43.833835       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:17:43.833874       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	I0731 23:17:53.834970       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:17:53.835022       1 main.go:299] handling current node
	I0731 23:17:53.835041       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:17:53.835046       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:17:53.835173       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:17:53.835191       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.2.0/24] 
	I0731 23:18:03.837840       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:18:03.837940       1 main.go:299] handling current node
	I0731 23:18:03.837969       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:18:03.837988       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:18:03.838162       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:18:03.838190       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [67726bf0c8778c7d1626625b33292a04ba1b7870a56e711e1b0896d186e13542] <==
	I0731 23:16:31.728580       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 23:16:31.728612       1 policy_source.go:224] refreshing policies
	I0731 23:16:31.746705       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 23:16:31.752117       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 23:16:31.752454       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 23:16:31.752655       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 23:16:31.752845       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 23:16:31.758465       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 23:16:31.760248       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 23:16:31.772499       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 23:16:31.782448       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 23:16:31.783020       1 aggregator.go:165] initial CRD sync complete...
	I0731 23:16:31.783098       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 23:16:31.783124       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 23:16:31.783176       1 cache.go:39] Caches are synced for autoregister controller
	E0731 23:16:31.796414       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0731 23:16:31.814553       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 23:16:32.665481       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 23:16:34.209943       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 23:16:34.339420       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 23:16:34.352426       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 23:16:34.432058       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 23:16:34.439941       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 23:16:44.919085       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 23:16:44.943595       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f] <==
	W0731 23:14:46.690731       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.690793       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.690863       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.690921       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.690978       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691032       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691084       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691138       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691192       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691245       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691378       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691487       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691547       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691601       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691669       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691789       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.692184       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.692680       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.692761       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.695566       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.695673       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0731 23:14:46.695887       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 23:14:46.695993       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0731 23:14:46.696027       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0731 23:14:46.701405       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	
	
	==> kube-controller-manager [06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d] <==
	I0731 23:10:07.143210       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0731 23:10:36.099884       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-615814-m02\" does not exist"
	I0731 23:10:36.116950       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-615814-m02" podCIDRs=["10.244.1.0/24"]
	I0731 23:10:37.149225       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-615814-m02"
	I0731 23:10:55.260610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:10:57.823950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.143041ms"
	I0731 23:10:57.843692       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.674306ms"
	I0731 23:10:57.843795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.592µs"
	I0731 23:11:00.249077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.012236ms"
	I0731 23:11:00.249153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.255µs"
	I0731 23:11:00.945211       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.568427ms"
	I0731 23:11:00.945454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.274µs"
	I0731 23:11:31.006553       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:11:31.006729       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-615814-m03\" does not exist"
	I0731 23:11:31.020376       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-615814-m03" podCIDRs=["10.244.2.0/24"]
	I0731 23:11:32.174646       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-615814-m03"
	I0731 23:11:51.511451       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m03"
	I0731 23:12:20.613013       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:12:21.939307       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-615814-m03\" does not exist"
	I0731 23:12:21.939884       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:12:21.948818       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-615814-m03" podCIDRs=["10.244.3.0/24"]
	I0731 23:12:40.970256       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:13:22.230590       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m03"
	I0731 23:13:22.277061       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.394658ms"
	I0731 23:13:22.277244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.294µs"
	
	
	==> kube-controller-manager [813a6c631fb5df4bac13ef67e733092fb060a5cde1be7b2f853af7c2e9fba44c] <==
	I0731 23:16:44.975011       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 23:16:45.336645       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:16:45.336689       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 23:16:45.349745       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:17:04.252725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.057409ms"
	I0731 23:17:04.265913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.117249ms"
	I0731 23:17:04.266080       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.51µs"
	I0731 23:17:08.467673       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-615814-m02\" does not exist"
	I0731 23:17:08.482833       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-615814-m02" podCIDRs=["10.244.1.0/24"]
	I0731 23:17:10.377959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.116µs"
	I0731 23:17:10.388708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.956µs"
	I0731 23:17:10.421410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.357µs"
	I0731 23:17:10.430567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.376µs"
	I0731 23:17:10.433806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.72µs"
	I0731 23:17:15.010362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.882µs"
	I0731 23:17:27.704046       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:17:27.726983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.686µs"
	I0731 23:17:27.745727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.829µs"
	I0731 23:17:30.678626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.462869ms"
	I0731 23:17:30.679605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.493µs"
	I0731 23:17:46.306683       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:17:47.218640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:17:47.219340       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-615814-m03\" does not exist"
	I0731 23:17:47.230556       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-615814-m03" podCIDRs=["10.244.2.0/24"]
	I0731 23:18:06.153183       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	
	
	==> kube-proxy [3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638] <==
	I0731 23:09:50.082692       1 server_linux.go:69] "Using iptables proxy"
	I0731 23:09:50.100344       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	I0731 23:09:50.133766       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 23:09:50.133835       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 23:09:50.133854       1 server_linux.go:165] "Using iptables Proxier"
	I0731 23:09:50.137692       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 23:09:50.137986       1 server.go:872] "Version info" version="v1.30.3"
	I0731 23:09:50.138014       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:09:50.140024       1 config.go:192] "Starting service config controller"
	I0731 23:09:50.140349       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 23:09:50.140450       1 config.go:101] "Starting endpoint slice config controller"
	I0731 23:09:50.140470       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 23:09:50.142037       1 config.go:319] "Starting node config controller"
	I0731 23:09:50.142073       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 23:09:50.241477       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 23:09:50.241547       1 shared_informer.go:320] Caches are synced for service config
	I0731 23:09:50.242239       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [81b413f3a1447eefd064b9c4597c8b80fa2ec8449862378299d92754979475ae] <==
	I0731 23:16:33.116616       1 server_linux.go:69] "Using iptables proxy"
	I0731 23:16:33.168065       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	I0731 23:16:33.230429       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 23:16:33.230533       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 23:16:33.230551       1 server_linux.go:165] "Using iptables Proxier"
	I0731 23:16:33.235142       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 23:16:33.235675       1 server.go:872] "Version info" version="v1.30.3"
	I0731 23:16:33.235707       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:16:33.238463       1 config.go:192] "Starting service config controller"
	I0731 23:16:33.238522       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 23:16:33.238551       1 config.go:101] "Starting endpoint slice config controller"
	I0731 23:16:33.238554       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 23:16:33.238964       1 config.go:319] "Starting node config controller"
	I0731 23:16:33.238998       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 23:16:33.338642       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 23:16:33.338684       1 shared_informer.go:320] Caches are synced for service config
	I0731 23:16:33.339062       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [600c3b77043c1f6e266fb6bfb7f11a7bbd458517ba9441be75b2cf41373e8d45] <==
	I0731 23:16:29.870001       1 serving.go:380] Generated self-signed cert in-memory
	W0731 23:16:31.689802       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 23:16:31.689841       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 23:16:31.689854       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 23:16:31.689860       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 23:16:31.732037       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 23:16:31.732103       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:16:31.736119       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 23:16:31.736193       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 23:16:31.736997       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 23:16:31.739386       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 23:16:31.836416       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8] <==
	E0731 23:09:31.881375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 23:09:31.881404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 23:09:31.881430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 23:09:32.773909       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 23:09:32.773954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 23:09:32.840077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 23:09:32.840141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 23:09:32.847139       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 23:09:32.847191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 23:09:32.869147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 23:09:32.869356       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 23:09:32.931687       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 23:09:32.932359       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 23:09:32.949325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 23:09:32.949428       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 23:09:33.011897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 23:09:33.012023       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 23:09:33.061385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 23:09:33.061523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 23:09:33.252994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 23:09:33.253095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 23:09:33.311360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 23:09:33.311461       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0731 23:09:35.466831       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 23:14:46.662012       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 23:16:31 multinode-615814 kubelet[3066]: I0731 23:16:31.800942    3066 kubelet_node_status.go:112] "Node was previously registered" node="multinode-615814"
	Jul 31 23:16:31 multinode-615814 kubelet[3066]: I0731 23:16:31.801046    3066 kubelet_node_status.go:76] "Successfully registered node" node="multinode-615814"
	Jul 31 23:16:31 multinode-615814 kubelet[3066]: I0731 23:16:31.802300    3066 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 23:16:31 multinode-615814 kubelet[3066]: I0731 23:16:31.803229    3066 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: E0731 23:16:32.027495    3066 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-scheduler-multinode-615814\" already exists" pod="kube-system/kube-scheduler-multinode-615814"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.188371    3066 apiserver.go:52] "Watching apiserver"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.191636    3066 topology_manager.go:215] "Topology Admit Handler" podUID="a4a7743e-a0ac-46c9-b041-5c4e527bb96b" podNamespace="kube-system" podName="kindnet-hmtpd"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.192516    3066 topology_manager.go:215] "Topology Admit Handler" podUID="e3359694-2a08-4a1b-8a0a-3f2e12dca7cb" podNamespace="kube-system" podName="kube-proxy-kgb6k"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.193033    3066 topology_manager.go:215] "Topology Admit Handler" podUID="a37a98d7-a790-4ed5-b579-b1e797f76da4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qnjmk"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.193380    3066 topology_manager.go:215] "Topology Admit Handler" podUID="e2d9b360-8119-43cc-b5bb-a90064a3de8b" podNamespace="kube-system" podName="storage-provisioner"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.193494    3066 topology_manager.go:215] "Topology Admit Handler" podUID="d26553da-0087-42e4-896d-22b1f3a79f1d" podNamespace="default" podName="busybox-fc5497c4f-csqxw"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.201541    3066 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.285116    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e2d9b360-8119-43cc-b5bb-a90064a3de8b-tmp\") pod \"storage-provisioner\" (UID: \"e2d9b360-8119-43cc-b5bb-a90064a3de8b\") " pod="kube-system/storage-provisioner"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.285209    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a4a7743e-a0ac-46c9-b041-5c4e527bb96b-cni-cfg\") pod \"kindnet-hmtpd\" (UID: \"a4a7743e-a0ac-46c9-b041-5c4e527bb96b\") " pod="kube-system/kindnet-hmtpd"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.285227    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4a7743e-a0ac-46c9-b041-5c4e527bb96b-xtables-lock\") pod \"kindnet-hmtpd\" (UID: \"a4a7743e-a0ac-46c9-b041-5c4e527bb96b\") " pod="kube-system/kindnet-hmtpd"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.285241    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4a7743e-a0ac-46c9-b041-5c4e527bb96b-lib-modules\") pod \"kindnet-hmtpd\" (UID: \"a4a7743e-a0ac-46c9-b041-5c4e527bb96b\") " pod="kube-system/kindnet-hmtpd"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.285296    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3359694-2a08-4a1b-8a0a-3f2e12dca7cb-xtables-lock\") pod \"kube-proxy-kgb6k\" (UID: \"e3359694-2a08-4a1b-8a0a-3f2e12dca7cb\") " pod="kube-system/kube-proxy-kgb6k"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.285312    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3359694-2a08-4a1b-8a0a-3f2e12dca7cb-lib-modules\") pod \"kube-proxy-kgb6k\" (UID: \"e3359694-2a08-4a1b-8a0a-3f2e12dca7cb\") " pod="kube-system/kube-proxy-kgb6k"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: E0731 23:16:32.337641    3066 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-615814\" already exists" pod="kube-system/kube-apiserver-multinode-615814"
	Jul 31 23:16:38 multinode-615814 kubelet[3066]: I0731 23:16:38.942413    3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 31 23:17:28 multinode-615814 kubelet[3066]: E0731 23:17:28.279674    3066 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:17:28 multinode-615814 kubelet[3066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:17:28 multinode-615814 kubelet[3066]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:17:28 multinode-615814 kubelet[3066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:17:28 multinode-615814 kubelet[3066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 23:18:08.665635 1213805 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-1172186/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
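The "failed to output last start logs" message in the stderr block above is Go's bufio.Scanner giving up on an over-long line: its default token limit is 64 KiB (bufio.MaxScanTokenSize), and at least one line in the recorded lastStart.txt exceeds that, so minikube skipped echoing the file for this invocation. A quick, hedged way to confirm the oversized line from the CI host (plain awk against the path printed above):

	# report any line in lastStart.txt longer than bufio.Scanner's default 64 KiB token limit
	awk 'length($0) > 65536 { printf "line %d: %d bytes\n", NR, length($0) }' \
	  /home/jenkins/minikube-integration/19312-1172186/.minikube/logs/lastStart.txt
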
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-615814 -n multinode-615814
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-615814 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (326.94s)
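The recurring "Could not set up iptables canary" entries in the kubelet section above are consistent with kube-proxy's "No iptables support for family: IPv6" line: the guest kernel has no ip6table_nat support, so the IPv6 canary chain cannot be created while the IPv4 data path keeps working. If the profile is still up, two hedged probes over minikube ssh (same -p/-n flags the test harness uses elsewhere in this report) would confirm it:

	out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814 "lsmod | grep ip6table || echo ip6table modules not loaded"
	out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814 "sudo ip6tables -t nat -L -n"   # expected to fail with the same 'Table does not exist' error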

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 stop
E0731 23:19:53.720415 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-615814 stop: exit status 82 (2m0.494238331s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-615814-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-615814 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-615814 status: exit status 3 (18.765687606s)

                                                
                                                
-- stdout --
	multinode-615814
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-615814-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 23:20:32.192512 1214462 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	E0731 23:20:32.192559 1214462 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-615814 status" : exit status 3
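The stop timeout and the subsequent "no route to host" status errors both point at the multinode-615814-m02 guest being wedged between states. A possible local follow-up is essentially what the GUEST_STOP_TIMEOUT box above asks for, plus a libvirt-level check (virsh is assumed to be available on the CI host, as the kvm2 driver depends on libvirt, and the debug lines in this report show the domain names match the machine names):

	out/minikube-linux-amd64 -p multinode-615814 logs --file=logs.txt   # collect full logs, per the advice box above
	sudo virsh list --all                                               # is the multinode-615814-m02 domain still "running"?
	sudo virsh destroy multinode-615814-m02                             # hypothetical last resort: force power-off of the stuck domain
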
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-615814 -n multinode-615814
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-615814 logs -n 25: (1.526269278s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m02:/home/docker/cp-test.txt                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814:/home/docker/cp-test_multinode-615814-m02_multinode-615814.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n multinode-615814 sudo cat                                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | /home/docker/cp-test_multinode-615814-m02_multinode-615814.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m02:/home/docker/cp-test.txt                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m03:/home/docker/cp-test_multinode-615814-m02_multinode-615814-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n multinode-615814-m03 sudo cat                                   | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | /home/docker/cp-test_multinode-615814-m02_multinode-615814-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp testdata/cp-test.txt                                                | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:11 UTC | 31 Jul 24 23:11 UTC |
	|         | multinode-615814-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m03:/home/docker/cp-test.txt                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile4241457848/001/cp-test_multinode-615814-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m03:/home/docker/cp-test.txt                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814:/home/docker/cp-test_multinode-615814-m03_multinode-615814.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n multinode-615814 sudo cat                                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | /home/docker/cp-test_multinode-615814-m03_multinode-615814.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m03:/home/docker/cp-test.txt                       | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814-m02:/home/docker/cp-test_multinode-615814-m03_multinode-615814-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n multinode-615814-m02 sudo cat                                   | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | /home/docker/cp-test_multinode-615814-m03_multinode-615814-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-615814 node stop m03                                                          | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	| node    | multinode-615814 node start                                                             | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-615814                                                                | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC |                     |
	| stop    | -p multinode-615814                                                                     | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC |                     |
	| start   | -p multinode-615814                                                                     | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:14 UTC | 31 Jul 24 23:18 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-615814                                                                | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:18 UTC |                     |
	| node    | multinode-615814 node delete                                                            | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:18 UTC | 31 Jul 24 23:18 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-615814 stop                                                                   | multinode-615814 | jenkins | v1.33.1 | 31 Jul 24 23:18 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 23:14:45
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 23:14:45.575980 1212267 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:14:45.576305 1212267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:14:45.576314 1212267 out.go:304] Setting ErrFile to fd 2...
	I0731 23:14:45.576319 1212267 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:14:45.576509 1212267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 23:14:45.577096 1212267 out.go:298] Setting JSON to false
	I0731 23:14:45.578141 1212267 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":25037,"bootTime":1722442649,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 23:14:45.578213 1212267 start.go:139] virtualization: kvm guest
	I0731 23:14:45.580269 1212267 out.go:177] * [multinode-615814] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 23:14:45.581764 1212267 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 23:14:45.581773 1212267 notify.go:220] Checking for updates...
	I0731 23:14:45.583543 1212267 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:14:45.584911 1212267 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:14:45.586277 1212267 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:14:45.587961 1212267 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 23:14:45.589489 1212267 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:14:45.591415 1212267 config.go:182] Loaded profile config "multinode-615814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:14:45.591551 1212267 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:14:45.592213 1212267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:14:45.592322 1212267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:14:45.608963 1212267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35253
	I0731 23:14:45.609459 1212267 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:14:45.610069 1212267 main.go:141] libmachine: Using API Version  1
	I0731 23:14:45.610095 1212267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:14:45.610483 1212267 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:14:45.610717 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:14:45.650425 1212267 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 23:14:45.651848 1212267 start.go:297] selected driver: kvm2
	I0731 23:14:45.651872 1212267 start.go:901] validating driver "kvm2" against &{Name:multinode-615814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-615814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:14:45.652052 1212267 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:14:45.652631 1212267 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:14:45.652743 1212267 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 23:14:45.670210 1212267 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 23:14:45.671371 1212267 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:14:45.671436 1212267 cni.go:84] Creating CNI manager for ""
	I0731 23:14:45.671448 1212267 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 23:14:45.671545 1212267 start.go:340] cluster config:
	{Name:multinode-615814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-615814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:14:45.671744 1212267 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:14:45.673569 1212267 out.go:177] * Starting "multinode-615814" primary control-plane node in "multinode-615814" cluster
	I0731 23:14:45.674808 1212267 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 23:14:45.674858 1212267 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 23:14:45.674867 1212267 cache.go:56] Caching tarball of preloaded images
	I0731 23:14:45.675000 1212267 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 23:14:45.675013 1212267 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 23:14:45.675144 1212267 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/config.json ...
	I0731 23:14:45.675364 1212267 start.go:360] acquireMachinesLock for multinode-615814: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:14:45.675416 1212267 start.go:364] duration metric: took 29.372µs to acquireMachinesLock for "multinode-615814"
	I0731 23:14:45.675437 1212267 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:14:45.675446 1212267 fix.go:54] fixHost starting: 
	I0731 23:14:45.675715 1212267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:14:45.675762 1212267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:14:45.691755 1212267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I0731 23:14:45.692266 1212267 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:14:45.692826 1212267 main.go:141] libmachine: Using API Version  1
	I0731 23:14:45.692850 1212267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:14:45.693203 1212267 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:14:45.693433 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:14:45.693605 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetState
	I0731 23:14:45.695315 1212267 fix.go:112] recreateIfNeeded on multinode-615814: state=Running err=<nil>
	W0731 23:14:45.695354 1212267 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:14:45.697375 1212267 out.go:177] * Updating the running kvm2 "multinode-615814" VM ...
	I0731 23:14:45.698819 1212267 machine.go:94] provisionDockerMachine start ...
	I0731 23:14:45.698857 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:14:45.699223 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:45.702021 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.702548 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:45.702581 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.702823 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:14:45.703026 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.703211 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.703332 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:14:45.703511 1212267 main.go:141] libmachine: Using SSH client type: native
	I0731 23:14:45.703711 1212267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0731 23:14:45.703722 1212267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:14:45.816732 1212267 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-615814
	
	I0731 23:14:45.816771 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetMachineName
	I0731 23:14:45.817066 1212267 buildroot.go:166] provisioning hostname "multinode-615814"
	I0731 23:14:45.817094 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetMachineName
	I0731 23:14:45.817300 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:45.820285 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.820665 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:45.820698 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.820831 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:14:45.821032 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.821220 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.821369 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:14:45.821584 1212267 main.go:141] libmachine: Using SSH client type: native
	I0731 23:14:45.821811 1212267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0731 23:14:45.821826 1212267 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-615814 && echo "multinode-615814" | sudo tee /etc/hostname
	I0731 23:14:45.951924 1212267 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-615814
	
	I0731 23:14:45.951965 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:45.955086 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.955564 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:45.955622 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:45.955809 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:14:45.956044 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.956224 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:45.956354 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:14:45.956576 1212267 main.go:141] libmachine: Using SSH client type: native
	I0731 23:14:45.956802 1212267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0731 23:14:45.956826 1212267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-615814' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-615814/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-615814' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:14:46.073405 1212267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:14:46.073448 1212267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 23:14:46.073468 1212267 buildroot.go:174] setting up certificates
	I0731 23:14:46.073480 1212267 provision.go:84] configureAuth start
	I0731 23:14:46.073494 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetMachineName
	I0731 23:14:46.073802 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetIP
	I0731 23:14:46.076605 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.077074 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:46.077109 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.077347 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:46.079708 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.080115 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:46.080144 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.080324 1212267 provision.go:143] copyHostCerts
	I0731 23:14:46.080361 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 23:14:46.080395 1212267 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 23:14:46.080404 1212267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 23:14:46.080474 1212267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 23:14:46.080626 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 23:14:46.080649 1212267 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 23:14:46.080654 1212267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 23:14:46.080681 1212267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 23:14:46.080726 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 23:14:46.080742 1212267 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 23:14:46.080749 1212267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 23:14:46.080770 1212267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 23:14:46.080824 1212267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.multinode-615814 san=[127.0.0.1 192.168.39.129 localhost minikube multinode-615814]
	I0731 23:14:46.351568 1212267 provision.go:177] copyRemoteCerts
	I0731 23:14:46.351637 1212267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:14:46.351664 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:46.354717 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.355162 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:46.355210 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.355390 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:14:46.355622 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:46.355806 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:14:46.355954 1212267 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/multinode-615814/id_rsa Username:docker}
	I0731 23:14:46.443839 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0731 23:14:46.443943 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 23:14:46.471190 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0731 23:14:46.471276 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 23:14:46.497796 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0731 23:14:46.497879 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0731 23:14:46.524609 1212267 provision.go:87] duration metric: took 451.104502ms to configureAuth
	I0731 23:14:46.524647 1212267 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:14:46.524948 1212267 config.go:182] Loaded profile config "multinode-615814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:14:46.525044 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:14:46.527677 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.528107 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:14:46.528143 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:14:46.528346 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:14:46.528595 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:46.528782 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:14:46.528957 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:14:46.529168 1212267 main.go:141] libmachine: Using SSH client type: native
	I0731 23:14:46.529343 1212267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0731 23:14:46.529358 1212267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 23:16:17.247478 1212267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 23:16:17.247520 1212267 machine.go:97] duration metric: took 1m31.548678578s to provisionDockerMachine
	I0731 23:16:17.247539 1212267 start.go:293] postStartSetup for "multinode-615814" (driver="kvm2")
	I0731 23:16:17.247550 1212267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:16:17.247569 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:16:17.247943 1212267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:16:17.247982 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:16:17.251499 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.252068 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:17.252113 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.252294 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:16:17.252551 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:16:17.252757 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:16:17.252950 1212267 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/multinode-615814/id_rsa Username:docker}
	I0731 23:16:17.339522 1212267 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:16:17.343736 1212267 command_runner.go:130] > NAME=Buildroot
	I0731 23:16:17.343768 1212267 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0731 23:16:17.343773 1212267 command_runner.go:130] > ID=buildroot
	I0731 23:16:17.343780 1212267 command_runner.go:130] > VERSION_ID=2023.02.9
	I0731 23:16:17.343785 1212267 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0731 23:16:17.343927 1212267 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 23:16:17.343960 1212267 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 23:16:17.344134 1212267 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 23:16:17.344243 1212267 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 23:16:17.344257 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /etc/ssl/certs/11794002.pem
	I0731 23:16:17.344354 1212267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:16:17.354628 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:16:17.379437 1212267 start.go:296] duration metric: took 131.879782ms for postStartSetup
	I0731 23:16:17.379497 1212267 fix.go:56] duration metric: took 1m31.704049881s for fixHost
	I0731 23:16:17.379531 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:16:17.382647 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.383049 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:17.383079 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.383347 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:16:17.383612 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:16:17.383822 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:16:17.383982 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:16:17.384215 1212267 main.go:141] libmachine: Using SSH client type: native
	I0731 23:16:17.384459 1212267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0731 23:16:17.384500 1212267 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 23:16:17.497032 1212267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722467777.471916767
	
	I0731 23:16:17.497058 1212267 fix.go:216] guest clock: 1722467777.471916767
	I0731 23:16:17.497066 1212267 fix.go:229] Guest: 2024-07-31 23:16:17.471916767 +0000 UTC Remote: 2024-07-31 23:16:17.379503265 +0000 UTC m=+91.846296835 (delta=92.413502ms)
	I0731 23:16:17.497089 1212267 fix.go:200] guest clock delta is within tolerance: 92.413502ms
	I0731 23:16:17.497096 1212267 start.go:83] releasing machines lock for "multinode-615814", held for 1m31.821667272s
	I0731 23:16:17.497117 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:16:17.497423 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetIP
	I0731 23:16:17.500425 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.500781 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:17.500825 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.501069 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:16:17.501673 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:16:17.501862 1212267 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:16:17.501946 1212267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 23:16:17.501992 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:16:17.502109 1212267 ssh_runner.go:195] Run: cat /version.json
	I0731 23:16:17.502137 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:16:17.505007 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.505250 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.505447 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:17.505478 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.505648 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:16:17.505768 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:17.505797 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:17.505888 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:16:17.505977 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:16:17.506070 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:16:17.506138 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:16:17.506245 1212267 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/multinode-615814/id_rsa Username:docker}
	I0731 23:16:17.506420 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:16:17.506592 1212267 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/multinode-615814/id_rsa Username:docker}
	I0731 23:16:17.588883 1212267 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0731 23:16:17.589233 1212267 ssh_runner.go:195] Run: systemctl --version
	I0731 23:16:17.610253 1212267 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0731 23:16:17.610361 1212267 command_runner.go:130] > systemd 252 (252)
	I0731 23:16:17.610395 1212267 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0731 23:16:17.610456 1212267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 23:16:17.769476 1212267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0731 23:16:17.775462 1212267 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0731 23:16:17.775569 1212267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:16:17.775644 1212267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:16:17.785801 1212267 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 23:16:17.785846 1212267 start.go:495] detecting cgroup driver to use...
	I0731 23:16:17.785929 1212267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:16:17.803844 1212267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:16:17.819207 1212267 docker.go:217] disabling cri-docker service (if available) ...
	I0731 23:16:17.819280 1212267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 23:16:17.834207 1212267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 23:16:17.849617 1212267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 23:16:18.008351 1212267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 23:16:18.174365 1212267 docker.go:233] disabling docker service ...
	I0731 23:16:18.174454 1212267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 23:16:18.194867 1212267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 23:16:18.209982 1212267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 23:16:18.371294 1212267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 23:16:18.529279 1212267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 23:16:18.544828 1212267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:16:18.564833 1212267 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0731 23:16:18.565124 1212267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 23:16:18.565200 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.576844 1212267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 23:16:18.576930 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.588430 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.600037 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.611808 1212267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:16:18.623846 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.635691 1212267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.647137 1212267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:16:18.658692 1212267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:16:18.669166 1212267 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0731 23:16:18.669265 1212267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:16:18.679663 1212267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:16:18.826698 1212267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 23:16:25.528268 1212267 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.701521645s)
	I0731 23:16:25.528304 1212267 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 23:16:25.528354 1212267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 23:16:25.533387 1212267 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0731 23:16:25.533426 1212267 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0731 23:16:25.533433 1212267 command_runner.go:130] > Device: 0,22	Inode: 1344        Links: 1
	I0731 23:16:25.533440 1212267 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 23:16:25.533447 1212267 command_runner.go:130] > Access: 2024-07-31 23:16:25.387899896 +0000
	I0731 23:16:25.533457 1212267 command_runner.go:130] > Modify: 2024-07-31 23:16:25.387899896 +0000
	I0731 23:16:25.533464 1212267 command_runner.go:130] > Change: 2024-07-31 23:16:25.387899896 +0000
	I0731 23:16:25.533469 1212267 command_runner.go:130] >  Birth: -
	I0731 23:16:25.533653 1212267 start.go:563] Will wait 60s for crictl version
	I0731 23:16:25.533718 1212267 ssh_runner.go:195] Run: which crictl
	I0731 23:16:25.537978 1212267 command_runner.go:130] > /usr/bin/crictl
	I0731 23:16:25.538063 1212267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:16:25.575186 1212267 command_runner.go:130] > Version:  0.1.0
	I0731 23:16:25.575216 1212267 command_runner.go:130] > RuntimeName:  cri-o
	I0731 23:16:25.575223 1212267 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0731 23:16:25.575230 1212267 command_runner.go:130] > RuntimeApiVersion:  v1
	I0731 23:16:25.576443 1212267 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 23:16:25.576523 1212267 ssh_runner.go:195] Run: crio --version
	I0731 23:16:25.606483 1212267 command_runner.go:130] > crio version 1.29.1
	I0731 23:16:25.606517 1212267 command_runner.go:130] > Version:        1.29.1
	I0731 23:16:25.606524 1212267 command_runner.go:130] > GitCommit:      unknown
	I0731 23:16:25.606529 1212267 command_runner.go:130] > GitCommitDate:  unknown
	I0731 23:16:25.606533 1212267 command_runner.go:130] > GitTreeState:   clean
	I0731 23:16:25.606538 1212267 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 23:16:25.606542 1212267 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 23:16:25.606546 1212267 command_runner.go:130] > Compiler:       gc
	I0731 23:16:25.606550 1212267 command_runner.go:130] > Platform:       linux/amd64
	I0731 23:16:25.606555 1212267 command_runner.go:130] > Linkmode:       dynamic
	I0731 23:16:25.606559 1212267 command_runner.go:130] > BuildTags:      
	I0731 23:16:25.606564 1212267 command_runner.go:130] >   containers_image_ostree_stub
	I0731 23:16:25.606570 1212267 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 23:16:25.606574 1212267 command_runner.go:130] >   btrfs_noversion
	I0731 23:16:25.606578 1212267 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 23:16:25.606583 1212267 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 23:16:25.606588 1212267 command_runner.go:130] >   seccomp
	I0731 23:16:25.606593 1212267 command_runner.go:130] > LDFlags:          unknown
	I0731 23:16:25.606600 1212267 command_runner.go:130] > SeccompEnabled:   true
	I0731 23:16:25.606607 1212267 command_runner.go:130] > AppArmorEnabled:  false
	I0731 23:16:25.606719 1212267 ssh_runner.go:195] Run: crio --version
	I0731 23:16:25.635621 1212267 command_runner.go:130] > crio version 1.29.1
	I0731 23:16:25.635649 1212267 command_runner.go:130] > Version:        1.29.1
	I0731 23:16:25.635655 1212267 command_runner.go:130] > GitCommit:      unknown
	I0731 23:16:25.635660 1212267 command_runner.go:130] > GitCommitDate:  unknown
	I0731 23:16:25.635663 1212267 command_runner.go:130] > GitTreeState:   clean
	I0731 23:16:25.635669 1212267 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0731 23:16:25.635673 1212267 command_runner.go:130] > GoVersion:      go1.21.6
	I0731 23:16:25.635676 1212267 command_runner.go:130] > Compiler:       gc
	I0731 23:16:25.635681 1212267 command_runner.go:130] > Platform:       linux/amd64
	I0731 23:16:25.635685 1212267 command_runner.go:130] > Linkmode:       dynamic
	I0731 23:16:25.635690 1212267 command_runner.go:130] > BuildTags:      
	I0731 23:16:25.635694 1212267 command_runner.go:130] >   containers_image_ostree_stub
	I0731 23:16:25.635738 1212267 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0731 23:16:25.635751 1212267 command_runner.go:130] >   btrfs_noversion
	I0731 23:16:25.635759 1212267 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0731 23:16:25.635768 1212267 command_runner.go:130] >   libdm_no_deferred_remove
	I0731 23:16:25.635774 1212267 command_runner.go:130] >   seccomp
	I0731 23:16:25.635783 1212267 command_runner.go:130] > LDFlags:          unknown
	I0731 23:16:25.635789 1212267 command_runner.go:130] > SeccompEnabled:   true
	I0731 23:16:25.635796 1212267 command_runner.go:130] > AppArmorEnabled:  false
	I0731 23:16:25.637862 1212267 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 23:16:25.639022 1212267 main.go:141] libmachine: (multinode-615814) Calling .GetIP
	I0731 23:16:25.641964 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:25.642491 1212267 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:16:25.642521 1212267 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:16:25.642810 1212267 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 23:16:25.647247 1212267 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0731 23:16:25.647398 1212267 kubeadm.go:883] updating cluster {Name:multinode-615814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-615814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 23:16:25.647593 1212267 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 23:16:25.647689 1212267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:16:25.691330 1212267 command_runner.go:130] > {
	I0731 23:16:25.691358 1212267 command_runner.go:130] >   "images": [
	I0731 23:16:25.691362 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691370 1212267 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 23:16:25.691374 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691380 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 23:16:25.691387 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691391 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691401 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 23:16:25.691408 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 23:16:25.691411 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691416 1212267 command_runner.go:130] >       "size": "87165492",
	I0731 23:16:25.691419 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.691426 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691443 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691447 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691451 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691454 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691461 1212267 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 23:16:25.691468 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691473 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 23:16:25.691476 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691480 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691487 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 23:16:25.691497 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 23:16:25.691500 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691504 1212267 command_runner.go:130] >       "size": "87174707",
	I0731 23:16:25.691508 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.691525 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691529 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691532 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691536 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691539 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691544 1212267 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 23:16:25.691548 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691553 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 23:16:25.691557 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691561 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691568 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 23:16:25.691575 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 23:16:25.691581 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691586 1212267 command_runner.go:130] >       "size": "1363676",
	I0731 23:16:25.691589 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.691594 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691598 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691602 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691605 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691611 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691616 1212267 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 23:16:25.691621 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691627 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 23:16:25.691631 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691636 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691643 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 23:16:25.691656 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 23:16:25.691662 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691666 1212267 command_runner.go:130] >       "size": "31470524",
	I0731 23:16:25.691670 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.691674 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691680 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691683 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691687 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691693 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691699 1212267 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 23:16:25.691705 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691709 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 23:16:25.691716 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691719 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691728 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 23:16:25.691735 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 23:16:25.691740 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691744 1212267 command_runner.go:130] >       "size": "61245718",
	I0731 23:16:25.691755 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.691761 1212267 command_runner.go:130] >       "username": "nonroot",
	I0731 23:16:25.691765 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691772 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691775 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691781 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691787 1212267 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 23:16:25.691793 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691798 1212267 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 23:16:25.691804 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691808 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691815 1212267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 23:16:25.691824 1212267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 23:16:25.691830 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691835 1212267 command_runner.go:130] >       "size": "150779692",
	I0731 23:16:25.691840 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.691843 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.691847 1212267 command_runner.go:130] >       },
	I0731 23:16:25.691852 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691855 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691862 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691865 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691871 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691876 1212267 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 23:16:25.691882 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691887 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 23:16:25.691892 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691896 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.691906 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 23:16:25.691915 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 23:16:25.691920 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691924 1212267 command_runner.go:130] >       "size": "117609954",
	I0731 23:16:25.691929 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.691933 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.691939 1212267 command_runner.go:130] >       },
	I0731 23:16:25.691942 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.691948 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.691951 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.691957 1212267 command_runner.go:130] >     },
	I0731 23:16:25.691960 1212267 command_runner.go:130] >     {
	I0731 23:16:25.691968 1212267 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 23:16:25.691974 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.691979 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 23:16:25.691985 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.691989 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.692005 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 23:16:25.692015 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 23:16:25.692021 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692026 1212267 command_runner.go:130] >       "size": "112198984",
	I0731 23:16:25.692032 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.692036 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.692040 1212267 command_runner.go:130] >       },
	I0731 23:16:25.692043 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.692047 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.692050 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.692053 1212267 command_runner.go:130] >     },
	I0731 23:16:25.692057 1212267 command_runner.go:130] >     {
	I0731 23:16:25.692062 1212267 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 23:16:25.692065 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.692070 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 23:16:25.692073 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692077 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.692084 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 23:16:25.692107 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 23:16:25.692111 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692115 1212267 command_runner.go:130] >       "size": "85953945",
	I0731 23:16:25.692118 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.692122 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.692126 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.692130 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.692133 1212267 command_runner.go:130] >     },
	I0731 23:16:25.692136 1212267 command_runner.go:130] >     {
	I0731 23:16:25.692141 1212267 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 23:16:25.692144 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.692149 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 23:16:25.692152 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692156 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.692162 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 23:16:25.692169 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 23:16:25.692172 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692178 1212267 command_runner.go:130] >       "size": "63051080",
	I0731 23:16:25.692182 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.692185 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.692188 1212267 command_runner.go:130] >       },
	I0731 23:16:25.692192 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.692196 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.692199 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.692202 1212267 command_runner.go:130] >     },
	I0731 23:16:25.692206 1212267 command_runner.go:130] >     {
	I0731 23:16:25.692212 1212267 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 23:16:25.692216 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.692221 1212267 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 23:16:25.692226 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692230 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.692237 1212267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 23:16:25.692246 1212267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 23:16:25.692249 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.692253 1212267 command_runner.go:130] >       "size": "750414",
	I0731 23:16:25.692258 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.692262 1212267 command_runner.go:130] >         "value": "65535"
	I0731 23:16:25.692268 1212267 command_runner.go:130] >       },
	I0731 23:16:25.692272 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.692278 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.692282 1212267 command_runner.go:130] >       "pinned": true
	I0731 23:16:25.692288 1212267 command_runner.go:130] >     }
	I0731 23:16:25.692292 1212267 command_runner.go:130] >   ]
	I0731 23:16:25.692297 1212267 command_runner.go:130] > }
	I0731 23:16:25.693023 1212267 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 23:16:25.693046 1212267 crio.go:433] Images already preloaded, skipping extraction
	I0731 23:16:25.693102 1212267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:16:25.727290 1212267 command_runner.go:130] > {
	I0731 23:16:25.727316 1212267 command_runner.go:130] >   "images": [
	I0731 23:16:25.727322 1212267 command_runner.go:130] >     {
	I0731 23:16:25.727335 1212267 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0731 23:16:25.727342 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.727356 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0731 23:16:25.727362 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727368 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.727380 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0731 23:16:25.727393 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0731 23:16:25.727408 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727417 1212267 command_runner.go:130] >       "size": "87165492",
	I0731 23:16:25.727425 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.727432 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.727444 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.727454 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.727461 1212267 command_runner.go:130] >     },
	I0731 23:16:25.727466 1212267 command_runner.go:130] >     {
	I0731 23:16:25.727472 1212267 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0731 23:16:25.727477 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.727485 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0731 23:16:25.727491 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727499 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.727511 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0731 23:16:25.727522 1212267 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0731 23:16:25.727532 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727538 1212267 command_runner.go:130] >       "size": "87174707",
	I0731 23:16:25.727545 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.727557 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.727565 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.727574 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.727579 1212267 command_runner.go:130] >     },
	I0731 23:16:25.727588 1212267 command_runner.go:130] >     {
	I0731 23:16:25.727599 1212267 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0731 23:16:25.727609 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.727617 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0731 23:16:25.727625 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727632 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.727646 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0731 23:16:25.727655 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0731 23:16:25.727665 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727676 1212267 command_runner.go:130] >       "size": "1363676",
	I0731 23:16:25.727685 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.727691 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.727701 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.727711 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.727721 1212267 command_runner.go:130] >     },
	I0731 23:16:25.727734 1212267 command_runner.go:130] >     {
	I0731 23:16:25.727744 1212267 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0731 23:16:25.727754 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.727763 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0731 23:16:25.727772 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727778 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.727793 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0731 23:16:25.727813 1212267 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0731 23:16:25.727820 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727825 1212267 command_runner.go:130] >       "size": "31470524",
	I0731 23:16:25.727830 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.727835 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.727845 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.727855 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.727864 1212267 command_runner.go:130] >     },
	I0731 23:16:25.727870 1212267 command_runner.go:130] >     {
	I0731 23:16:25.727882 1212267 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0731 23:16:25.727891 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.727902 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0731 23:16:25.727908 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727912 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.727927 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0731 23:16:25.727943 1212267 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0731 23:16:25.727951 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.727958 1212267 command_runner.go:130] >       "size": "61245718",
	I0731 23:16:25.727967 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.727977 1212267 command_runner.go:130] >       "username": "nonroot",
	I0731 23:16:25.727986 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.727992 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.727995 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728004 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728018 1212267 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0731 23:16:25.728028 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728038 1212267 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0731 23:16:25.728044 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728056 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728070 1212267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0731 23:16:25.728080 1212267 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0731 23:16:25.728104 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728114 1212267 command_runner.go:130] >       "size": "150779692",
	I0731 23:16:25.728122 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.728129 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.728137 1212267 command_runner.go:130] >       },
	I0731 23:16:25.728147 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728156 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728165 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.728174 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728180 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728192 1212267 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0731 23:16:25.728203 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728211 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0731 23:16:25.728220 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728230 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728244 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0731 23:16:25.728257 1212267 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0731 23:16:25.728262 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728269 1212267 command_runner.go:130] >       "size": "117609954",
	I0731 23:16:25.728278 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.728288 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.728294 1212267 command_runner.go:130] >       },
	I0731 23:16:25.728303 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728311 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728320 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.728329 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728337 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728343 1212267 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0731 23:16:25.728351 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728359 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0731 23:16:25.728368 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728375 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728399 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0731 23:16:25.728416 1212267 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0731 23:16:25.728424 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728430 1212267 command_runner.go:130] >       "size": "112198984",
	I0731 23:16:25.728437 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.728444 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.728453 1212267 command_runner.go:130] >       },
	I0731 23:16:25.728460 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728466 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728474 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.728482 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728488 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728501 1212267 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0731 23:16:25.728508 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728515 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0731 23:16:25.728519 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728526 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728540 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0731 23:16:25.728556 1212267 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0731 23:16:25.728564 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728574 1212267 command_runner.go:130] >       "size": "85953945",
	I0731 23:16:25.728583 1212267 command_runner.go:130] >       "uid": null,
	I0731 23:16:25.728592 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728599 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728603 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.728611 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728619 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728633 1212267 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0731 23:16:25.728643 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728651 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0731 23:16:25.728660 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728667 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728680 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0731 23:16:25.728692 1212267 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0731 23:16:25.728701 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728711 1212267 command_runner.go:130] >       "size": "63051080",
	I0731 23:16:25.728720 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.728747 1212267 command_runner.go:130] >         "value": "0"
	I0731 23:16:25.728756 1212267 command_runner.go:130] >       },
	I0731 23:16:25.728763 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728770 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728774 1212267 command_runner.go:130] >       "pinned": false
	I0731 23:16:25.728782 1212267 command_runner.go:130] >     },
	I0731 23:16:25.728791 1212267 command_runner.go:130] >     {
	I0731 23:16:25.728803 1212267 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0731 23:16:25.728813 1212267 command_runner.go:130] >       "repoTags": [
	I0731 23:16:25.728822 1212267 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0731 23:16:25.728830 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728840 1212267 command_runner.go:130] >       "repoDigests": [
	I0731 23:16:25.728852 1212267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0731 23:16:25.728863 1212267 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0731 23:16:25.728874 1212267 command_runner.go:130] >       ],
	I0731 23:16:25.728884 1212267 command_runner.go:130] >       "size": "750414",
	I0731 23:16:25.728893 1212267 command_runner.go:130] >       "uid": {
	I0731 23:16:25.728904 1212267 command_runner.go:130] >         "value": "65535"
	I0731 23:16:25.728913 1212267 command_runner.go:130] >       },
	I0731 23:16:25.728921 1212267 command_runner.go:130] >       "username": "",
	I0731 23:16:25.728931 1212267 command_runner.go:130] >       "spec": null,
	I0731 23:16:25.728938 1212267 command_runner.go:130] >       "pinned": true
	I0731 23:16:25.728941 1212267 command_runner.go:130] >     }
	I0731 23:16:25.728949 1212267 command_runner.go:130] >   ]
	I0731 23:16:25.728954 1212267 command_runner.go:130] > }
	I0731 23:16:25.729128 1212267 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 23:16:25.729145 1212267 cache_images.go:84] Images are preloaded, skipping loading
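The image listing above is what drives the "all images are preloaded" and "Images are preloaded, skipping loading" conclusions: the JSON from `sudo crictl images --output json` is parsed and compared against the expected image set for the target Kubernetes version. As a rough standalone sketch of that kind of check (not minikube's actual code; the helper name and the required-tag list are illustrative, taken from the tags visible in the listing):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON shape printed by `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasAllImages reports whether every required repo tag appears in the crictl
// listing. requiredTags would be the expected image set for the target
// Kubernetes version, e.g. registry.k8s.io/kube-apiserver:v1.30.3.
func hasAllImages(out []byte, requiredTags []string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range requiredTags {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Same command the log runs over SSH; here it is run locally for simplicity.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/kube-controller-manager:v1.30.3",
		"registry.k8s.io/kube-scheduler:v1.30.3",
		"registry.k8s.io/kube-proxy:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
	}
	ok, err := hasAllImages(out, required)
	if err != nil {
		panic(err)
	}
	fmt.Println("all images preloaded:", ok)
}

Comparing repo tags (rather than digests) matches what the listing above exposes for preloaded images.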
	I0731 23:16:25.729156 1212267 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.30.3 crio true true} ...
	I0731 23:16:25.729279 1212267 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-615814 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-615814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
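The kubelet unit printed above (kubeadm.go:946) is generated from the node IP, node name, and Kubernetes version shown in the config dump that follows it. A minimal sketch of rendering such a systemd drop-in with Go's text/template, assuming illustrative type and field names rather than minikube's internals:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds just the values that vary per node in the drop-in above.
// These field names are illustrative, not minikube's internal types.
type kubeletOpts struct {
	BinDir   string // e.g. /var/lib/minikube/binaries/v1.30.3
	NodeName string // e.g. multinode-615814
	NodeIP   string // e.g. 192.168.39.129
}

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	opts := kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.30.3",
		NodeName: "multinode-615814",
		NodeIP:   "192.168.39.129",
	}
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	// Print the rendered unit; on a real node it would be written into a
	// kubelet.service.d drop-in directory (exact path assumed here).
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}

The empty ExecStart= line clears the packaged unit's command before setting the node-specific one, which is standard systemd drop-in override behavior.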
	I0731 23:16:25.729370 1212267 ssh_runner.go:195] Run: crio config
	I0731 23:16:25.771287 1212267 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0731 23:16:25.771314 1212267 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0731 23:16:25.771325 1212267 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0731 23:16:25.771328 1212267 command_runner.go:130] > #
	I0731 23:16:25.771337 1212267 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0731 23:16:25.771347 1212267 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0731 23:16:25.771356 1212267 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0731 23:16:25.771366 1212267 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0731 23:16:25.771372 1212267 command_runner.go:130] > # reload'.
	I0731 23:16:25.771382 1212267 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0731 23:16:25.771396 1212267 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0731 23:16:25.771406 1212267 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0731 23:16:25.771415 1212267 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0731 23:16:25.771423 1212267 command_runner.go:130] > [crio]
	I0731 23:16:25.771433 1212267 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0731 23:16:25.771445 1212267 command_runner.go:130] > # containers images, in this directory.
	I0731 23:16:25.771452 1212267 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0731 23:16:25.771468 1212267 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0731 23:16:25.771478 1212267 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0731 23:16:25.771488 1212267 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0731 23:16:25.771498 1212267 command_runner.go:130] > # imagestore = ""
	I0731 23:16:25.771508 1212267 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0731 23:16:25.771518 1212267 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0731 23:16:25.771528 1212267 command_runner.go:130] > storage_driver = "overlay"
	I0731 23:16:25.771536 1212267 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0731 23:16:25.771547 1212267 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0731 23:16:25.771556 1212267 command_runner.go:130] > storage_option = [
	I0731 23:16:25.771566 1212267 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0731 23:16:25.771576 1212267 command_runner.go:130] > ]
	I0731 23:16:25.771586 1212267 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0731 23:16:25.771601 1212267 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0731 23:16:25.771610 1212267 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0731 23:16:25.771618 1212267 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0731 23:16:25.771633 1212267 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0731 23:16:25.771644 1212267 command_runner.go:130] > # always happen on a node reboot
	I0731 23:16:25.771656 1212267 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0731 23:16:25.771674 1212267 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0731 23:16:25.771687 1212267 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0731 23:16:25.771697 1212267 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0731 23:16:25.771705 1212267 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0731 23:16:25.771719 1212267 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0731 23:16:25.771733 1212267 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0731 23:16:25.771740 1212267 command_runner.go:130] > # internal_wipe = true
	I0731 23:16:25.771748 1212267 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0731 23:16:25.771757 1212267 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0731 23:16:25.771762 1212267 command_runner.go:130] > # internal_repair = false
	I0731 23:16:25.771775 1212267 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0731 23:16:25.771787 1212267 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0731 23:16:25.771799 1212267 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0731 23:16:25.771811 1212267 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0731 23:16:25.771821 1212267 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0731 23:16:25.771830 1212267 command_runner.go:130] > [crio.api]
	I0731 23:16:25.771839 1212267 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0731 23:16:25.771850 1212267 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0731 23:16:25.771859 1212267 command_runner.go:130] > # IP address on which the stream server will listen.
	I0731 23:16:25.771869 1212267 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0731 23:16:25.771880 1212267 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0731 23:16:25.771891 1212267 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0731 23:16:25.771900 1212267 command_runner.go:130] > # stream_port = "0"
	I0731 23:16:25.771909 1212267 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0731 23:16:25.771919 1212267 command_runner.go:130] > # stream_enable_tls = false
	I0731 23:16:25.771928 1212267 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0731 23:16:25.771938 1212267 command_runner.go:130] > # stream_idle_timeout = ""
	I0731 23:16:25.771948 1212267 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0731 23:16:25.771962 1212267 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0731 23:16:25.771971 1212267 command_runner.go:130] > # minutes.
	I0731 23:16:25.771978 1212267 command_runner.go:130] > # stream_tls_cert = ""
	I0731 23:16:25.771991 1212267 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0731 23:16:25.772003 1212267 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0731 23:16:25.772015 1212267 command_runner.go:130] > # stream_tls_key = ""
	I0731 23:16:25.772028 1212267 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0731 23:16:25.772043 1212267 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0731 23:16:25.772062 1212267 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0731 23:16:25.772072 1212267 command_runner.go:130] > # stream_tls_ca = ""
	I0731 23:16:25.772084 1212267 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 23:16:25.772111 1212267 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0731 23:16:25.772125 1212267 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0731 23:16:25.772133 1212267 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0731 23:16:25.772147 1212267 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0731 23:16:25.772158 1212267 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0731 23:16:25.772168 1212267 command_runner.go:130] > [crio.runtime]
	I0731 23:16:25.772178 1212267 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0731 23:16:25.772193 1212267 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0731 23:16:25.772203 1212267 command_runner.go:130] > # "nofile=1024:2048"
	I0731 23:16:25.772213 1212267 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0731 23:16:25.772222 1212267 command_runner.go:130] > # default_ulimits = [
	I0731 23:16:25.772227 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.772237 1212267 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0731 23:16:25.772247 1212267 command_runner.go:130] > # no_pivot = false
	I0731 23:16:25.772256 1212267 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0731 23:16:25.772269 1212267 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0731 23:16:25.772277 1212267 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0731 23:16:25.772290 1212267 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0731 23:16:25.772302 1212267 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0731 23:16:25.772314 1212267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 23:16:25.772325 1212267 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0731 23:16:25.772332 1212267 command_runner.go:130] > # Cgroup setting for conmon
	I0731 23:16:25.772346 1212267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0731 23:16:25.772355 1212267 command_runner.go:130] > conmon_cgroup = "pod"
	I0731 23:16:25.772365 1212267 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0731 23:16:25.772376 1212267 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0731 23:16:25.772386 1212267 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0731 23:16:25.772394 1212267 command_runner.go:130] > conmon_env = [
	I0731 23:16:25.772400 1212267 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 23:16:25.772406 1212267 command_runner.go:130] > ]
	I0731 23:16:25.772411 1212267 command_runner.go:130] > # Additional environment variables to set for all the
	I0731 23:16:25.772418 1212267 command_runner.go:130] > # containers. These are overridden if set in the
	I0731 23:16:25.772424 1212267 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0731 23:16:25.772433 1212267 command_runner.go:130] > # default_env = [
	I0731 23:16:25.772438 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.772450 1212267 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0731 23:16:25.772462 1212267 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0731 23:16:25.772470 1212267 command_runner.go:130] > # selinux = false
	I0731 23:16:25.772481 1212267 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0731 23:16:25.772494 1212267 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0731 23:16:25.772504 1212267 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0731 23:16:25.772511 1212267 command_runner.go:130] > # seccomp_profile = ""
	I0731 23:16:25.772521 1212267 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0731 23:16:25.772533 1212267 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0731 23:16:25.772546 1212267 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0731 23:16:25.772557 1212267 command_runner.go:130] > # which might increase security.
	I0731 23:16:25.772564 1212267 command_runner.go:130] > # This option is currently deprecated,
	I0731 23:16:25.772577 1212267 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0731 23:16:25.772588 1212267 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0731 23:16:25.772599 1212267 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0731 23:16:25.772612 1212267 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0731 23:16:25.772625 1212267 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0731 23:16:25.772637 1212267 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0731 23:16:25.772649 1212267 command_runner.go:130] > # This option supports live configuration reload.
	I0731 23:16:25.772659 1212267 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0731 23:16:25.772668 1212267 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0731 23:16:25.772678 1212267 command_runner.go:130] > # the cgroup blockio controller.
	I0731 23:16:25.772686 1212267 command_runner.go:130] > # blockio_config_file = ""
	I0731 23:16:25.772699 1212267 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0731 23:16:25.772708 1212267 command_runner.go:130] > # blockio parameters.
	I0731 23:16:25.772716 1212267 command_runner.go:130] > # blockio_reload = false
	I0731 23:16:25.772729 1212267 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0731 23:16:25.772739 1212267 command_runner.go:130] > # irqbalance daemon.
	I0731 23:16:25.772748 1212267 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0731 23:16:25.772761 1212267 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0731 23:16:25.772777 1212267 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0731 23:16:25.772792 1212267 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0731 23:16:25.772804 1212267 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0731 23:16:25.772818 1212267 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0731 23:16:25.772829 1212267 command_runner.go:130] > # This option supports live configuration reload.
	I0731 23:16:25.772840 1212267 command_runner.go:130] > # rdt_config_file = ""
	I0731 23:16:25.772852 1212267 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0731 23:16:25.772861 1212267 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0731 23:16:25.772899 1212267 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0731 23:16:25.772912 1212267 command_runner.go:130] > # separate_pull_cgroup = ""
	I0731 23:16:25.772922 1212267 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0731 23:16:25.772931 1212267 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0731 23:16:25.772938 1212267 command_runner.go:130] > # will be added.
	I0731 23:16:25.772949 1212267 command_runner.go:130] > # default_capabilities = [
	I0731 23:16:25.772954 1212267 command_runner.go:130] > # 	"CHOWN",
	I0731 23:16:25.772960 1212267 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0731 23:16:25.772969 1212267 command_runner.go:130] > # 	"FSETID",
	I0731 23:16:25.772976 1212267 command_runner.go:130] > # 	"FOWNER",
	I0731 23:16:25.772982 1212267 command_runner.go:130] > # 	"SETGID",
	I0731 23:16:25.772990 1212267 command_runner.go:130] > # 	"SETUID",
	I0731 23:16:25.772997 1212267 command_runner.go:130] > # 	"SETPCAP",
	I0731 23:16:25.773006 1212267 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0731 23:16:25.773015 1212267 command_runner.go:130] > # 	"KILL",
	I0731 23:16:25.773021 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.773032 1212267 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0731 23:16:25.773043 1212267 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0731 23:16:25.773053 1212267 command_runner.go:130] > # add_inheritable_capabilities = false
	I0731 23:16:25.773062 1212267 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0731 23:16:25.773075 1212267 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 23:16:25.773085 1212267 command_runner.go:130] > default_sysctls = [
	I0731 23:16:25.773093 1212267 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0731 23:16:25.773100 1212267 command_runner.go:130] > ]
	I0731 23:16:25.773109 1212267 command_runner.go:130] > # List of devices on the host that a
	I0731 23:16:25.773122 1212267 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0731 23:16:25.773129 1212267 command_runner.go:130] > # allowed_devices = [
	I0731 23:16:25.773135 1212267 command_runner.go:130] > # 	"/dev/fuse",
	I0731 23:16:25.773140 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.773147 1212267 command_runner.go:130] > # List of additional devices, specified as
	I0731 23:16:25.773160 1212267 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0731 23:16:25.773173 1212267 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0731 23:16:25.773186 1212267 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0731 23:16:25.773195 1212267 command_runner.go:130] > # additional_devices = [
	I0731 23:16:25.773201 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.773211 1212267 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0731 23:16:25.773220 1212267 command_runner.go:130] > # cdi_spec_dirs = [
	I0731 23:16:25.773226 1212267 command_runner.go:130] > # 	"/etc/cdi",
	I0731 23:16:25.773232 1212267 command_runner.go:130] > # 	"/var/run/cdi",
	I0731 23:16:25.773241 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.773251 1212267 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0731 23:16:25.773261 1212267 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0731 23:16:25.773265 1212267 command_runner.go:130] > # Defaults to false.
	I0731 23:16:25.773270 1212267 command_runner.go:130] > # device_ownership_from_security_context = false
	I0731 23:16:25.773278 1212267 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0731 23:16:25.773284 1212267 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0731 23:16:25.773292 1212267 command_runner.go:130] > # hooks_dir = [
	I0731 23:16:25.773300 1212267 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0731 23:16:25.773308 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.773318 1212267 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0731 23:16:25.773331 1212267 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0731 23:16:25.773339 1212267 command_runner.go:130] > # its default mounts from the following two files:
	I0731 23:16:25.773348 1212267 command_runner.go:130] > #
	I0731 23:16:25.773357 1212267 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0731 23:16:25.773371 1212267 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0731 23:16:25.773383 1212267 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0731 23:16:25.773391 1212267 command_runner.go:130] > #
	I0731 23:16:25.773401 1212267 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0731 23:16:25.773414 1212267 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0731 23:16:25.773421 1212267 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0731 23:16:25.773426 1212267 command_runner.go:130] > #      only add mounts it finds in this file.
	I0731 23:16:25.773430 1212267 command_runner.go:130] > #
	I0731 23:16:25.773434 1212267 command_runner.go:130] > # default_mounts_file = ""
	I0731 23:16:25.773439 1212267 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0731 23:16:25.773449 1212267 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0731 23:16:25.773458 1212267 command_runner.go:130] > pids_limit = 1024
	I0731 23:16:25.773468 1212267 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0731 23:16:25.773483 1212267 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0731 23:16:25.773496 1212267 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0731 23:16:25.773512 1212267 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0731 23:16:25.773519 1212267 command_runner.go:130] > # log_size_max = -1
	I0731 23:16:25.773532 1212267 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0731 23:16:25.773542 1212267 command_runner.go:130] > # log_to_journald = false
	I0731 23:16:25.773552 1212267 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0731 23:16:25.773560 1212267 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0731 23:16:25.773566 1212267 command_runner.go:130] > # Path to directory for container attach sockets.
	I0731 23:16:25.773573 1212267 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0731 23:16:25.773579 1212267 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0731 23:16:25.773586 1212267 command_runner.go:130] > # bind_mount_prefix = ""
	I0731 23:16:25.773590 1212267 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0731 23:16:25.773594 1212267 command_runner.go:130] > # read_only = false
	I0731 23:16:25.773602 1212267 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0731 23:16:25.773610 1212267 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0731 23:16:25.773619 1212267 command_runner.go:130] > # live configuration reload.
	I0731 23:16:25.773625 1212267 command_runner.go:130] > # log_level = "info"
	I0731 23:16:25.773639 1212267 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0731 23:16:25.773651 1212267 command_runner.go:130] > # This option supports live configuration reload.
	I0731 23:16:25.773660 1212267 command_runner.go:130] > # log_filter = ""
	I0731 23:16:25.773669 1212267 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0731 23:16:25.773682 1212267 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0731 23:16:25.773692 1212267 command_runner.go:130] > # separated by comma.
	I0731 23:16:25.773704 1212267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 23:16:25.773714 1212267 command_runner.go:130] > # uid_mappings = ""
	I0731 23:16:25.773724 1212267 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0731 23:16:25.773735 1212267 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0731 23:16:25.773742 1212267 command_runner.go:130] > # separated by comma.
	I0731 23:16:25.773754 1212267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 23:16:25.773762 1212267 command_runner.go:130] > # gid_mappings = ""
	I0731 23:16:25.773771 1212267 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0731 23:16:25.773782 1212267 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 23:16:25.773789 1212267 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 23:16:25.773798 1212267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 23:16:25.773804 1212267 command_runner.go:130] > # minimum_mappable_uid = -1
	I0731 23:16:25.773812 1212267 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0731 23:16:25.773820 1212267 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0731 23:16:25.773827 1212267 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0731 23:16:25.773836 1212267 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0731 23:16:25.773844 1212267 command_runner.go:130] > # minimum_mappable_gid = -1
	I0731 23:16:25.773849 1212267 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0731 23:16:25.773857 1212267 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0731 23:16:25.773865 1212267 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0731 23:16:25.773869 1212267 command_runner.go:130] > # ctr_stop_timeout = 30
	I0731 23:16:25.773875 1212267 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0731 23:16:25.773882 1212267 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0731 23:16:25.773887 1212267 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0731 23:16:25.773894 1212267 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0731 23:16:25.773897 1212267 command_runner.go:130] > drop_infra_ctr = false
	I0731 23:16:25.773905 1212267 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0731 23:16:25.773910 1212267 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0731 23:16:25.773919 1212267 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0731 23:16:25.773925 1212267 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0731 23:16:25.773932 1212267 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0731 23:16:25.773939 1212267 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0731 23:16:25.773945 1212267 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0731 23:16:25.773953 1212267 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0731 23:16:25.773956 1212267 command_runner.go:130] > # shared_cpuset = ""
	I0731 23:16:25.773962 1212267 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0731 23:16:25.773968 1212267 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0731 23:16:25.773972 1212267 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0731 23:16:25.773979 1212267 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0731 23:16:25.773986 1212267 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0731 23:16:25.773991 1212267 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0731 23:16:25.773999 1212267 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0731 23:16:25.774003 1212267 command_runner.go:130] > # enable_criu_support = false
	I0731 23:16:25.774011 1212267 command_runner.go:130] > # Enable/disable the generation of the container,
	I0731 23:16:25.774019 1212267 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0731 23:16:25.774023 1212267 command_runner.go:130] > # enable_pod_events = false
	I0731 23:16:25.774031 1212267 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0731 23:16:25.774043 1212267 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0731 23:16:25.774049 1212267 command_runner.go:130] > # default_runtime = "runc"
	I0731 23:16:25.774054 1212267 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0731 23:16:25.774063 1212267 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0731 23:16:25.774074 1212267 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0731 23:16:25.774081 1212267 command_runner.go:130] > # creation as a file is not desired either.
	I0731 23:16:25.774089 1212267 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0731 23:16:25.774095 1212267 command_runner.go:130] > # the hostname is being managed dynamically.
	I0731 23:16:25.774100 1212267 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0731 23:16:25.774106 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.774112 1212267 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0731 23:16:25.774120 1212267 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0731 23:16:25.774127 1212267 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0731 23:16:25.774134 1212267 command_runner.go:130] > # Each entry in the table should follow the format:
	I0731 23:16:25.774138 1212267 command_runner.go:130] > #
	I0731 23:16:25.774142 1212267 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0731 23:16:25.774149 1212267 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0731 23:16:25.774170 1212267 command_runner.go:130] > # runtime_type = "oci"
	I0731 23:16:25.774176 1212267 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0731 23:16:25.774181 1212267 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0731 23:16:25.774188 1212267 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0731 23:16:25.774192 1212267 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0731 23:16:25.774198 1212267 command_runner.go:130] > # monitor_env = []
	I0731 23:16:25.774203 1212267 command_runner.go:130] > # privileged_without_host_devices = false
	I0731 23:16:25.774209 1212267 command_runner.go:130] > # allowed_annotations = []
	I0731 23:16:25.774214 1212267 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0731 23:16:25.774219 1212267 command_runner.go:130] > # Where:
	I0731 23:16:25.774224 1212267 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0731 23:16:25.774232 1212267 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0731 23:16:25.774239 1212267 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0731 23:16:25.774246 1212267 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0731 23:16:25.774253 1212267 command_runner.go:130] > #   in $PATH.
	I0731 23:16:25.774259 1212267 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0731 23:16:25.774265 1212267 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0731 23:16:25.774271 1212267 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0731 23:16:25.774277 1212267 command_runner.go:130] > #   state.
	I0731 23:16:25.774283 1212267 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0731 23:16:25.774291 1212267 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0731 23:16:25.774297 1212267 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0731 23:16:25.774304 1212267 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0731 23:16:25.774310 1212267 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0731 23:16:25.774318 1212267 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0731 23:16:25.774322 1212267 command_runner.go:130] > #   The currently recognized values are:
	I0731 23:16:25.774330 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0731 23:16:25.774337 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0731 23:16:25.774345 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0731 23:16:25.774351 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0731 23:16:25.774360 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0731 23:16:25.774365 1212267 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0731 23:16:25.774373 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0731 23:16:25.774381 1212267 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0731 23:16:25.774386 1212267 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0731 23:16:25.774393 1212267 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0731 23:16:25.774399 1212267 command_runner.go:130] > #   deprecated option "conmon".
	I0731 23:16:25.774408 1212267 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0731 23:16:25.774413 1212267 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0731 23:16:25.774421 1212267 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0731 23:16:25.774426 1212267 command_runner.go:130] > #   should be moved to the container's cgroup
	I0731 23:16:25.774434 1212267 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0731 23:16:25.774439 1212267 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0731 23:16:25.774448 1212267 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0731 23:16:25.774453 1212267 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0731 23:16:25.774457 1212267 command_runner.go:130] > #
	I0731 23:16:25.774461 1212267 command_runner.go:130] > # Using the seccomp notifier feature:
	I0731 23:16:25.774466 1212267 command_runner.go:130] > #
	I0731 23:16:25.774472 1212267 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0731 23:16:25.774480 1212267 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0731 23:16:25.774483 1212267 command_runner.go:130] > #
	I0731 23:16:25.774489 1212267 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0731 23:16:25.774497 1212267 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0731 23:16:25.774500 1212267 command_runner.go:130] > #
	I0731 23:16:25.774506 1212267 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0731 23:16:25.774511 1212267 command_runner.go:130] > # feature.
	I0731 23:16:25.774517 1212267 command_runner.go:130] > #
	I0731 23:16:25.774523 1212267 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0731 23:16:25.774532 1212267 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0731 23:16:25.774537 1212267 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0731 23:16:25.774545 1212267 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0731 23:16:25.774551 1212267 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0731 23:16:25.774556 1212267 command_runner.go:130] > #
	I0731 23:16:25.774562 1212267 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0731 23:16:25.774570 1212267 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0731 23:16:25.774573 1212267 command_runner.go:130] > #
	I0731 23:16:25.774580 1212267 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0731 23:16:25.774588 1212267 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0731 23:16:25.774591 1212267 command_runner.go:130] > #
	I0731 23:16:25.774597 1212267 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0731 23:16:25.774605 1212267 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0731 23:16:25.774608 1212267 command_runner.go:130] > # limitation.
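For reference, the comments above describe two pieces that have to line up: a runtime handler whose allowed_annotations includes "io.kubernetes.cri-o.seccompNotifierAction", and a pod that carries that annotation (for example with the value "stop") and has restartPolicy set to Never. A minimal sketch of such a runtime handler entry, with an illustrative handler name and paths that are assumptions rather than values from this run, might look like:

	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]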
	I0731 23:16:25.774614 1212267 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0731 23:16:25.774618 1212267 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0731 23:16:25.774622 1212267 command_runner.go:130] > runtime_type = "oci"
	I0731 23:16:25.774626 1212267 command_runner.go:130] > runtime_root = "/run/runc"
	I0731 23:16:25.774630 1212267 command_runner.go:130] > runtime_config_path = ""
	I0731 23:16:25.774635 1212267 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0731 23:16:25.774641 1212267 command_runner.go:130] > monitor_cgroup = "pod"
	I0731 23:16:25.774646 1212267 command_runner.go:130] > monitor_exec_cgroup = ""
	I0731 23:16:25.774651 1212267 command_runner.go:130] > monitor_env = [
	I0731 23:16:25.774657 1212267 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0731 23:16:25.774662 1212267 command_runner.go:130] > ]
	I0731 23:16:25.774667 1212267 command_runner.go:130] > privileged_without_host_devices = false
	I0731 23:16:25.774674 1212267 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0731 23:16:25.774680 1212267 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0731 23:16:25.774687 1212267 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0731 23:16:25.774696 1212267 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0731 23:16:25.774705 1212267 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0731 23:16:25.774710 1212267 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0731 23:16:25.774721 1212267 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0731 23:16:25.774731 1212267 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0731 23:16:25.774739 1212267 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0731 23:16:25.774746 1212267 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0731 23:16:25.774749 1212267 command_runner.go:130] > # Example:
	I0731 23:16:25.774753 1212267 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0731 23:16:25.774758 1212267 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0731 23:16:25.774762 1212267 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0731 23:16:25.774770 1212267 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0731 23:16:25.774774 1212267 command_runner.go:130] > # cpuset = 0
	I0731 23:16:25.774778 1212267 command_runner.go:130] > # cpushares = "0-1"
	I0731 23:16:25.774781 1212267 command_runner.go:130] > # Where:
	I0731 23:16:25.774785 1212267 command_runner.go:130] > # The workload name is workload-type.
	I0731 23:16:25.774791 1212267 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0731 23:16:25.774796 1212267 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0731 23:16:25.774801 1212267 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0731 23:16:25.774809 1212267 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0731 23:16:25.774814 1212267 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0731 23:16:25.774818 1212267 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0731 23:16:25.774824 1212267 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0731 23:16:25.774829 1212267 command_runner.go:130] > # Default value is set to true
	I0731 23:16:25.774833 1212267 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0731 23:16:25.774838 1212267 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0731 23:16:25.774843 1212267 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0731 23:16:25.774847 1212267 command_runner.go:130] > # Default value is set to 'false'
	I0731 23:16:25.774850 1212267 command_runner.go:130] > # disable_hostport_mapping = false
	I0731 23:16:25.774856 1212267 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0731 23:16:25.774859 1212267 command_runner.go:130] > #
	I0731 23:16:25.774864 1212267 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0731 23:16:25.774870 1212267 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0731 23:16:25.774875 1212267 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0731 23:16:25.774881 1212267 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0731 23:16:25.774886 1212267 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
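As the comments note, registries are normally configured system-wide in /etc/containers/registries.conf rather than in this file. A minimal sketch of such an entry, using a hypothetical registry host, could look like:

	# /etc/containers/registries.conf (system-wide, not part of crio.conf)
	unqualified-search-registries = ["docker.io"]

	[[registry]]
	location = "registry.example.internal:5000"  # hypothetical local registry
	insecure = true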
	I0731 23:16:25.774889 1212267 command_runner.go:130] > [crio.image]
	I0731 23:16:25.774893 1212267 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0731 23:16:25.774897 1212267 command_runner.go:130] > # default_transport = "docker://"
	I0731 23:16:25.774903 1212267 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0731 23:16:25.774910 1212267 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0731 23:16:25.774913 1212267 command_runner.go:130] > # global_auth_file = ""
	I0731 23:16:25.774918 1212267 command_runner.go:130] > # The image used to instantiate infra containers.
	I0731 23:16:25.774926 1212267 command_runner.go:130] > # This option supports live configuration reload.
	I0731 23:16:25.774930 1212267 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0731 23:16:25.774936 1212267 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0731 23:16:25.774941 1212267 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0731 23:16:25.774945 1212267 command_runner.go:130] > # This option supports live configuration reload.
	I0731 23:16:25.774949 1212267 command_runner.go:130] > # pause_image_auth_file = ""
	I0731 23:16:25.774954 1212267 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0731 23:16:25.774959 1212267 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0731 23:16:25.774965 1212267 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0731 23:16:25.774974 1212267 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0731 23:16:25.774978 1212267 command_runner.go:130] > # pause_command = "/pause"
	I0731 23:16:25.774986 1212267 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0731 23:16:25.774994 1212267 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0731 23:16:25.775003 1212267 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0731 23:16:25.775010 1212267 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0731 23:16:25.775016 1212267 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0731 23:16:25.775024 1212267 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0731 23:16:25.775030 1212267 command_runner.go:130] > # pinned_images = [
	I0731 23:16:25.775033 1212267 command_runner.go:130] > # ]
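To illustrate the three matching styles described above (exact, glob with a trailing *, and keyword with wildcards on both ends), a pinned_images list could be written as follows; the image names here are purely illustrative:

	pinned_images = [
		"registry.k8s.io/pause:3.9",       # exact match
		"registry.k8s.io/kube-apiserver*", # glob: wildcard at the end
		"*busybox*",                       # keyword: wildcards on both ends
	]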
	I0731 23:16:25.775041 1212267 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0731 23:16:25.775048 1212267 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0731 23:16:25.775056 1212267 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0731 23:16:25.775064 1212267 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0731 23:16:25.775069 1212267 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0731 23:16:25.775075 1212267 command_runner.go:130] > # signature_policy = ""
	I0731 23:16:25.775080 1212267 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0731 23:16:25.775088 1212267 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0731 23:16:25.775096 1212267 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0731 23:16:25.775102 1212267 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0731 23:16:25.775109 1212267 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0731 23:16:25.775114 1212267 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0731 23:16:25.775122 1212267 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0731 23:16:25.775130 1212267 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0731 23:16:25.775135 1212267 command_runner.go:130] > # changing them here.
	I0731 23:16:25.775144 1212267 command_runner.go:130] > # insecure_registries = [
	I0731 23:16:25.775147 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.775154 1212267 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0731 23:16:25.775161 1212267 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0731 23:16:25.775165 1212267 command_runner.go:130] > # image_volumes = "mkdir"
	I0731 23:16:25.775172 1212267 command_runner.go:130] > # Temporary directory to use for storing big files
	I0731 23:16:25.775176 1212267 command_runner.go:130] > # big_files_temporary_dir = ""
	I0731 23:16:25.775184 1212267 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0731 23:16:25.775191 1212267 command_runner.go:130] > # CNI plugins.
	I0731 23:16:25.775194 1212267 command_runner.go:130] > [crio.network]
	I0731 23:16:25.775202 1212267 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0731 23:16:25.775208 1212267 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0731 23:16:25.775214 1212267 command_runner.go:130] > # cni_default_network = ""
	I0731 23:16:25.775219 1212267 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0731 23:16:25.775225 1212267 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0731 23:16:25.775231 1212267 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0731 23:16:25.775236 1212267 command_runner.go:130] > # plugin_dirs = [
	I0731 23:16:25.775243 1212267 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0731 23:16:25.775249 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.775256 1212267 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0731 23:16:25.775262 1212267 command_runner.go:130] > [crio.metrics]
	I0731 23:16:25.775266 1212267 command_runner.go:130] > # Globally enable or disable metrics support.
	I0731 23:16:25.775272 1212267 command_runner.go:130] > enable_metrics = true
	I0731 23:16:25.775277 1212267 command_runner.go:130] > # Specify enabled metrics collectors.
	I0731 23:16:25.775284 1212267 command_runner.go:130] > # Per default all metrics are enabled.
	I0731 23:16:25.775290 1212267 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0731 23:16:25.775299 1212267 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0731 23:16:25.775305 1212267 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0731 23:16:25.775311 1212267 command_runner.go:130] > # metrics_collectors = [
	I0731 23:16:25.775314 1212267 command_runner.go:130] > # 	"operations",
	I0731 23:16:25.775319 1212267 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0731 23:16:25.775326 1212267 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0731 23:16:25.775330 1212267 command_runner.go:130] > # 	"operations_errors",
	I0731 23:16:25.775334 1212267 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0731 23:16:25.775338 1212267 command_runner.go:130] > # 	"image_pulls_by_name",
	I0731 23:16:25.775343 1212267 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0731 23:16:25.775349 1212267 command_runner.go:130] > # 	"image_pulls_failures",
	I0731 23:16:25.775353 1212267 command_runner.go:130] > # 	"image_pulls_successes",
	I0731 23:16:25.775359 1212267 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0731 23:16:25.775363 1212267 command_runner.go:130] > # 	"image_layer_reuse",
	I0731 23:16:25.775370 1212267 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0731 23:16:25.775375 1212267 command_runner.go:130] > # 	"containers_oom_total",
	I0731 23:16:25.775379 1212267 command_runner.go:130] > # 	"containers_oom",
	I0731 23:16:25.775386 1212267 command_runner.go:130] > # 	"processes_defunct",
	I0731 23:16:25.775396 1212267 command_runner.go:130] > # 	"operations_total",
	I0731 23:16:25.775403 1212267 command_runner.go:130] > # 	"operations_latency_seconds",
	I0731 23:16:25.775407 1212267 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0731 23:16:25.775412 1212267 command_runner.go:130] > # 	"operations_errors_total",
	I0731 23:16:25.775417 1212267 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0731 23:16:25.775423 1212267 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0731 23:16:25.775428 1212267 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0731 23:16:25.775432 1212267 command_runner.go:130] > # 	"image_pulls_success_total",
	I0731 23:16:25.775436 1212267 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0731 23:16:25.775440 1212267 command_runner.go:130] > # 	"containers_oom_count_total",
	I0731 23:16:25.775445 1212267 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0731 23:16:25.775450 1212267 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0731 23:16:25.775454 1212267 command_runner.go:130] > # ]
	I0731 23:16:25.775459 1212267 command_runner.go:130] > # The port on which the metrics server will listen.
	I0731 23:16:25.775465 1212267 command_runner.go:130] > # metrics_port = 9090
	I0731 23:16:25.775469 1212267 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0731 23:16:25.775474 1212267 command_runner.go:130] > # metrics_socket = ""
	I0731 23:16:25.775479 1212267 command_runner.go:130] > # The certificate for the secure metrics server.
	I0731 23:16:25.775486 1212267 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0731 23:16:25.775492 1212267 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0731 23:16:25.775496 1212267 command_runner.go:130] > # certificate on any modification event.
	I0731 23:16:25.775502 1212267 command_runner.go:130] > # metrics_cert = ""
	I0731 23:16:25.775507 1212267 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0731 23:16:25.775513 1212267 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0731 23:16:25.775522 1212267 command_runner.go:130] > # metrics_key = ""
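This run only sets enable_metrics = true and leaves the collector list at its default (all collectors enabled). As a sketch of narrowing it down to a few of the collectors listed above, one could write:

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]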
	I0731 23:16:25.775530 1212267 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0731 23:16:25.775537 1212267 command_runner.go:130] > [crio.tracing]
	I0731 23:16:25.775546 1212267 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0731 23:16:25.775554 1212267 command_runner.go:130] > # enable_tracing = false
	I0731 23:16:25.775562 1212267 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0731 23:16:25.775572 1212267 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0731 23:16:25.775580 1212267 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0731 23:16:25.775588 1212267 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
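A sketch of enabling tracing with the documented endpoint and the always-sample rate mentioned above; the collector address depends on whatever OTLP gRPC collector is actually running, so treat it as an assumption:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 1000000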
	I0731 23:16:25.775592 1212267 command_runner.go:130] > # CRI-O NRI configuration.
	I0731 23:16:25.775595 1212267 command_runner.go:130] > [crio.nri]
	I0731 23:16:25.775600 1212267 command_runner.go:130] > # Globally enable or disable NRI.
	I0731 23:16:25.775603 1212267 command_runner.go:130] > # enable_nri = false
	I0731 23:16:25.775608 1212267 command_runner.go:130] > # NRI socket to listen on.
	I0731 23:16:25.775612 1212267 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0731 23:16:25.775617 1212267 command_runner.go:130] > # NRI plugin directory to use.
	I0731 23:16:25.775623 1212267 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0731 23:16:25.775628 1212267 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0731 23:16:25.775635 1212267 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0731 23:16:25.775640 1212267 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0731 23:16:25.775647 1212267 command_runner.go:130] > # nri_disable_connections = false
	I0731 23:16:25.775653 1212267 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0731 23:16:25.775661 1212267 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0731 23:16:25.775669 1212267 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0731 23:16:25.775679 1212267 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
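A minimal sketch of turning NRI on; the values simply mirror the commented defaults above and are not taken from this run's configuration:

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"
	nri_plugin_registration_timeout = "5s"
	nri_plugin_request_timeout = "2s"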
	I0731 23:16:25.775689 1212267 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0731 23:16:25.775697 1212267 command_runner.go:130] > [crio.stats]
	I0731 23:16:25.775703 1212267 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0731 23:16:25.775708 1212267 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0731 23:16:25.775713 1212267 command_runner.go:130] > # stats_collection_period = 0
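With the default of 0 the stats are collected on demand; switching to periodic collection is just a matter of setting a non-zero period, for example (the 10-second value is illustrative):

	[crio.stats]
	stats_collection_period = 10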
	I0731 23:16:25.775735 1212267 command_runner.go:130] ! time="2024-07-31 23:16:25.737095924Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0731 23:16:25.775749 1212267 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0731 23:16:25.775873 1212267 cni.go:84] Creating CNI manager for ""
	I0731 23:16:25.775884 1212267 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0731 23:16:25.775893 1212267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 23:16:25.775918 1212267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-615814 NodeName:multinode-615814 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 23:16:25.776050 1212267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-615814"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 23:16:25.776132 1212267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 23:16:25.786883 1212267 command_runner.go:130] > kubeadm
	I0731 23:16:25.786929 1212267 command_runner.go:130] > kubectl
	I0731 23:16:25.786936 1212267 command_runner.go:130] > kubelet
	I0731 23:16:25.786982 1212267 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:16:25.787054 1212267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 23:16:25.797190 1212267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0731 23:16:25.814789 1212267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:16:25.832074 1212267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0731 23:16:25.849760 1212267 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0731 23:16:25.854052 1212267 command_runner.go:130] > 192.168.39.129	control-plane.minikube.internal
	I0731 23:16:25.854160 1212267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:16:25.995016 1212267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:16:26.011457 1212267 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814 for IP: 192.168.39.129
	I0731 23:16:26.011491 1212267 certs.go:194] generating shared ca certs ...
	I0731 23:16:26.011517 1212267 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:16:26.011681 1212267 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 23:16:26.011725 1212267 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 23:16:26.011735 1212267 certs.go:256] generating profile certs ...
	I0731 23:16:26.011831 1212267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/client.key
	I0731 23:16:26.011892 1212267 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/apiserver.key.0892758f
	I0731 23:16:26.011925 1212267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/proxy-client.key
	I0731 23:16:26.011936 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0731 23:16:26.011948 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0731 23:16:26.011961 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0731 23:16:26.011976 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0731 23:16:26.011992 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0731 23:16:26.012006 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0731 23:16:26.012018 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0731 23:16:26.012031 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0731 23:16:26.012080 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 23:16:26.012138 1212267 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 23:16:26.012149 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 23:16:26.012171 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 23:16:26.012195 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 23:16:26.012219 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 23:16:26.012262 1212267 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:16:26.012290 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem -> /usr/share/ca-certificates/1179400.pem
	I0731 23:16:26.012306 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> /usr/share/ca-certificates/11794002.pem
	I0731 23:16:26.012319 1212267 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:16:26.012930 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:16:26.038706 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 23:16:26.064227 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:16:26.089928 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 23:16:26.115013 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0731 23:16:26.140530 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 23:16:26.165527 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 23:16:26.191798 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/multinode-615814/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0731 23:16:26.217284 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 23:16:26.242456 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 23:16:26.268015 1212267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:16:26.293740 1212267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 23:16:26.311471 1212267 ssh_runner.go:195] Run: openssl version
	I0731 23:16:26.317834 1212267 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0731 23:16:26.317939 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 23:16:26.330111 1212267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 23:16:26.334943 1212267 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 23:16:26.334986 1212267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 23:16:26.335038 1212267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 23:16:26.341309 1212267 command_runner.go:130] > 3ec20f2e
	I0731 23:16:26.341423 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:16:26.351997 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:16:26.363680 1212267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:16:26.368783 1212267 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:16:26.368839 1212267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:16:26.368883 1212267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:16:26.375879 1212267 command_runner.go:130] > b5213941
	I0731 23:16:26.375979 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 23:16:26.387116 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 23:16:26.399039 1212267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 23:16:26.403860 1212267 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 23:16:26.403918 1212267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 23:16:26.403965 1212267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 23:16:26.410052 1212267 command_runner.go:130] > 51391683
	I0731 23:16:26.410171 1212267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
	I0731 23:16:26.420668 1212267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:16:26.425669 1212267 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:16:26.425698 1212267 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0731 23:16:26.425704 1212267 command_runner.go:130] > Device: 253,1	Inode: 7339051     Links: 1
	I0731 23:16:26.425710 1212267 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0731 23:16:26.425717 1212267 command_runner.go:130] > Access: 2024-07-31 23:09:25.050273240 +0000
	I0731 23:16:26.425722 1212267 command_runner.go:130] > Modify: 2024-07-31 23:09:25.050273240 +0000
	I0731 23:16:26.425726 1212267 command_runner.go:130] > Change: 2024-07-31 23:09:25.050273240 +0000
	I0731 23:16:26.425731 1212267 command_runner.go:130] >  Birth: 2024-07-31 23:09:25.050273240 +0000
	I0731 23:16:26.425786 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 23:16:26.431804 1212267 command_runner.go:130] > Certificate will not expire
	I0731 23:16:26.431909 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 23:16:26.437838 1212267 command_runner.go:130] > Certificate will not expire
	I0731 23:16:26.437945 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 23:16:26.443873 1212267 command_runner.go:130] > Certificate will not expire
	I0731 23:16:26.443952 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 23:16:26.449972 1212267 command_runner.go:130] > Certificate will not expire
	I0731 23:16:26.450062 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 23:16:26.456015 1212267 command_runner.go:130] > Certificate will not expire
	I0731 23:16:26.456131 1212267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 23:16:26.462039 1212267 command_runner.go:130] > Certificate will not expire
	I0731 23:16:26.462120 1212267 kubeadm.go:392] StartCluster: {Name:multinode-615814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-615814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.94 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.72 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:16:26.462268 1212267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 23:16:26.462336 1212267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 23:16:26.498343 1212267 command_runner.go:130] > 9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3
	I0731 23:16:26.498380 1212267 command_runner.go:130] > 1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7
	I0731 23:16:26.498390 1212267 command_runner.go:130] > 4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6
	I0731 23:16:26.498401 1212267 command_runner.go:130] > 3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638
	I0731 23:16:26.498409 1212267 command_runner.go:130] > d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8
	I0731 23:16:26.498417 1212267 command_runner.go:130] > 06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d
	I0731 23:16:26.498425 1212267 command_runner.go:130] > c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f
	I0731 23:16:26.498436 1212267 command_runner.go:130] > 2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625
	I0731 23:16:26.499982 1212267 cri.go:89] found id: "9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3"
	I0731 23:16:26.500004 1212267 cri.go:89] found id: "1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7"
	I0731 23:16:26.500010 1212267 cri.go:89] found id: "4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6"
	I0731 23:16:26.500015 1212267 cri.go:89] found id: "3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638"
	I0731 23:16:26.500019 1212267 cri.go:89] found id: "d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8"
	I0731 23:16:26.500024 1212267 cri.go:89] found id: "06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d"
	I0731 23:16:26.500028 1212267 cri.go:89] found id: "c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f"
	I0731 23:16:26.500032 1212267 cri.go:89] found id: "2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625"
	I0731 23:16:26.500035 1212267 cri.go:89] found id: ""
	I0731 23:16:26.500108 1212267 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.887465021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c3515602-194d-4dda-bb84-f693e7d62dec name=/runtime.v1.RuntimeService/Version
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.889303018Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ae7a551-edd8-43a6-8b32-688dd418c8f0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.889869114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468032889833639,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ae7a551-edd8-43a6-8b32-688dd418c8f0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.890442905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6dc4e996-f32b-48c7-abce-86a5bcd5c516 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.890523873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6dc4e996-f32b-48c7-abce-86a5bcd5c516 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.890887443Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634238cea87dfd522a4afbc1b6f7c2e0723302042db2ef158be59eabb50aaf4b,PodSandboxId:081e7c7fabc314ca240a9df7a55f6f7d16f644b11d46a4d39f59adc0ad6415a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722467826440622614,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb39a1b6f6d9a920e7e3aef3cf1bc5b52f601b7e7c300509ba20765f3992a48,PodSandboxId:407378a1ef10a224c1d92e563c10d196b7bcddcd61e2e55f12dc5eec92a118a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722467792967903064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62a1c59ee05950bd51a41ad4264af099a21feea6709f28d502bb7b5d635ad18,PodSandboxId:c70b70cbb244ad0677eb11a6cd4ed6c5966736afe5a4acdd3ad819ee7cd731d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722467792989925682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b413f3a1447eefd064b9c4597c8b80fa2ec8449862378299d92754979475ae,PodSandboxId:9bf2a3208a6bf73094cb7026cc005746dcea46f08af5d9f2d0257b01a7019228,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722467792803791857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]
string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2442166c3707f6b5f4221023f6212378ba962adfaadd00e33dae9b1294ccbad,PodSandboxId:2735a821fa6f42c302df19f4313e103fd88c6438e8fe66b014fd59a6e3953131,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722467792722594977,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813a6c631fb5df4bac13ef67e733092fb060a5cde1be7b2f853af7c2e9fba44c,PodSandboxId:fbb8cea1e7e757417764e23d55d44e5400fa7a58bba63b59bf36a6aa997a64fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722467788913145592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:287f021b9eafae0b74b6723e97f61b50dee520e3c86c639294474bec248ee983,PodSandboxId:87e81de4f3188edd03f3a7e388af2142b483a6fd3b2655133e3c7635648fc680,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722467788872897441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,},Annotations:map[string]string{io.kubernetes.container.hash: e37353
df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67726bf0c8778c7d1626625b33292a04ba1b7870a56e711e1b0896d186e13542,PodSandboxId:cbf1afd582579aee58284f85977119621137c8c06101fe81032203ff4cb71325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722467788815988610,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600c3b77043c1f6e266fb6bfb7f11a7bbd458517ba9441be75b2cf41373e8d45,PodSandboxId:7889254fb9ddda1b07574262efe16b1ff037335e6fe4e994bdcdab3bef673e2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722467788799317842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0198443caaecdd03f1e10333dfbbee59233bcc806396e24cc89729cb7447b2b,PodSandboxId:dd0e7cd817ff0030b99e69ce0bb7d14eb4cc29cba9f4d7462e9af910c2c73902,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722467459737893713,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3,PodSandboxId:4db4d8ca82c04b4f264ea7cd645dbf80edc596faa387b73eda0b8bcc2bb1de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722467403929448206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7,PodSandboxId:6de822223e246b55811420f29f3cb1b5f11c0a0d0e15643230941cba2aeb75d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722467403848800962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.kubernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6,PodSandboxId:2c2f786f39b7bf49e72f083e5c0d6f9a1bd07174010f4936559ee6571baf04ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722467391805079728,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638,PodSandboxId:87fd03202426c7121a1c4267ab1fdd7459f1d3d6464f39a22e8aee03791d5a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722467389877001759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8,PodSandboxId:41204e620933f09192491f3a87633339ff053f82a41ccb46ab58e7062abd453a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722467369231417507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d,PodSandboxId:028928f1ce584786101582aeeccc466c906a9734020620361c08425eaa0310fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722467369198204764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625,PodSandboxId:c4e3ca8ab9201c594a3c595b574e3d9d2af547452a6473fb9b2d9707f9ecc88d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722467369153392589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,
},Annotations:map[string]string{io.kubernetes.container.hash: e37353df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f,PodSandboxId:c02cee6317a3d5c921a5c186764f3bbd483d401cbd369f7a6d83ff2138fe6eda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722467369174677751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6dc4e996-f32b-48c7-abce-86a5bcd5c516 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.894499179Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=dad69624-8687-425d-a4fe-3803aa9c22e6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.894964514Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:081e7c7fabc314ca240a9df7a55f6f7d16f644b11d46a4d39f59adc0ad6415a8,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-csqxw,Uid:d26553da-0087-42e4-896d-22b1f3a79f1d,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722467826308902858,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T23:16:32.191486833Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c70b70cbb244ad0677eb11a6cd4ed6c5966736afe5a4acdd3ad819ee7cd731d2,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qnjmk,Uid:a37a98d7-a790-4ed5-b579-b1e797f76da4,Namespace:kube-system,Attempt:1,}
,State:SANDBOX_READY,CreatedAt:1722467792567903954,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T23:16:32.191476009Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9bf2a3208a6bf73094cb7026cc005746dcea46f08af5d9f2d0257b01a7019228,Metadata:&PodSandboxMetadata{Name:kube-proxy-kgb6k,Uid:e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722467792558461476,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{
kubernetes.io/config.seen: 2024-07-31T23:16:32.191490733Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2735a821fa6f42c302df19f4313e103fd88c6438e8fe66b014fd59a6e3953131,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e2d9b360-8119-43cc-b5bb-a90064a3de8b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722467792554123881,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"
/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T23:16:32.191491833Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:407378a1ef10a224c1d92e563c10d196b7bcddcd61e2e55f12dc5eec92a118a0,Metadata:&PodSandboxMetadata{Name:kindnet-hmtpd,Uid:a4a7743e-a0ac-46c9-b041-5c4e527bb96b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722467792531190623,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,k8s-app: kindnet,pod-template-generat
ion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T23:16:32.191488029Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fbb8cea1e7e757417764e23d55d44e5400fa7a58bba63b59bf36a6aa997a64fb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-615814,Uid:b10bc625507898a89217593e914604a7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722467788689400552,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b10bc625507898a89217593e914604a7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b10bc625507898a89217593e914604a7,kubernetes.io/config.seen: 2024-07-31T23:16:28.185545677Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:87e81de4f3188edd03f3a7e388af2142b483a6fd3b2655133e3c7635648fc680,Metadata:&PodSandboxMetadat
a{Name:etcd-multinode-615814,Uid:d60e23990b964a97f772721b6217fdae,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722467788679032461,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.129:2379,kubernetes.io/config.hash: d60e23990b964a97f772721b6217fdae,kubernetes.io/config.seen: 2024-07-31T23:16:28.185539595Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cbf1afd582579aee58284f85977119621137c8c06101fe81032203ff4cb71325,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-615814,Uid:181d79bf2dfbbe750db3b987b0d19492,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722467788651680711,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io
.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.129:8443,kubernetes.io/config.hash: 181d79bf2dfbbe750db3b987b0d19492,kubernetes.io/config.seen: 2024-07-31T23:16:28.185544099Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7889254fb9ddda1b07574262efe16b1ff037335e6fe4e994bdcdab3bef673e2c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-615814,Uid:34be164481bffb189a8f543f27bf53f3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722467788649031298,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34be164481bffb189a8f543f27bf53f3,tier: control-plane,},Annotations:map[string]string{kuberne
tes.io/config.hash: 34be164481bffb189a8f543f27bf53f3,kubernetes.io/config.seen: 2024-07-31T23:16:28.185546584Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dd0e7cd817ff0030b99e69ce0bb7d14eb4cc29cba9f4d7462e9af910c2c73902,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-csqxw,Uid:d26553da-0087-42e4-896d-22b1f3a79f1d,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722467458125934134,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T23:10:57.811449888Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4db4d8ca82c04b4f264ea7cd645dbf80edc596faa387b73eda0b8bcc2bb1de78,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qnjmk,Uid:a37a98d7-a790-4ed5-b579-b1e797f76da4,Namespace:kube-system,Attemp
t:0,},State:SANDBOX_NOTREADY,CreatedAt:1722467403706877792,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T23:10:03.393862210Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6de822223e246b55811420f29f3cb1b5f11c0a0d0e15643230941cba2aeb75d3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e2d9b360-8119-43cc-b5bb-a90064a3de8b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722467403691146811,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[
string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T23:10:03.384078290Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:87fd03202426c7121a1c4267ab1fdd7459f1d3d6464f39a22e8aee03791d5a73,Metadata:&PodSandboxMetadata{Name:kube-proxy-kgb6k,Uid:e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,Namespace:kube-
system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722467389768398613,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T23:09:47.956300257Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2c2f786f39b7bf49e72f083e5c0d6f9a1bd07174010f4936559ee6571baf04ee,Metadata:&PodSandboxMetadata{Name:kindnet-hmtpd,Uid:a4a7743e-a0ac-46c9-b041-5c4e527bb96b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722467388251326564,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,k8s-app: kindnet,pod
-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T23:09:47.940981908Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:028928f1ce584786101582aeeccc466c906a9734020620361c08425eaa0310fb,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-615814,Uid:b10bc625507898a89217593e914604a7,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722467369010595733,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b10bc625507898a89217593e914604a7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b10bc625507898a89217593e914604a7,kubernetes.io/config.seen: 2024-07-31T23:09:28.529608673Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:41204e620933f09192491f3a87633339ff053f82a41ccb46ab58e7062abd453a,Metadat
a:&PodSandboxMetadata{Name:kube-scheduler-multinode-615814,Uid:34be164481bffb189a8f543f27bf53f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722467368990643283,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34be164481bffb189a8f543f27bf53f3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 34be164481bffb189a8f543f27bf53f3,kubernetes.io/config.seen: 2024-07-31T23:09:28.529609575Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c02cee6317a3d5c921a5c186764f3bbd483d401cbd369f7a6d83ff2138fe6eda,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-615814,Uid:181d79bf2dfbbe750db3b987b0d19492,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722467368985239114,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name:
kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.129:8443,kubernetes.io/config.hash: 181d79bf2dfbbe750db3b987b0d19492,kubernetes.io/config.seen: 2024-07-31T23:09:28.529607261Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c4e3ca8ab9201c594a3c595b574e3d9d2af547452a6473fb9b2d9707f9ecc88d,Metadata:&PodSandboxMetadata{Name:etcd-multinode-615814,Uid:d60e23990b964a97f772721b6217fdae,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722467368974928879,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https:
//192.168.39.129:2379,kubernetes.io/config.hash: d60e23990b964a97f772721b6217fdae,kubernetes.io/config.seen: 2024-07-31T23:09:28.529602578Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=dad69624-8687-425d-a4fe-3803aa9c22e6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.895844987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f7656a4-3e43-4faa-b13e-e8e8141ed5a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.895921504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f7656a4-3e43-4faa-b13e-e8e8141ed5a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.896641367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634238cea87dfd522a4afbc1b6f7c2e0723302042db2ef158be59eabb50aaf4b,PodSandboxId:081e7c7fabc314ca240a9df7a55f6f7d16f644b11d46a4d39f59adc0ad6415a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722467826440622614,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb39a1b6f6d9a920e7e3aef3cf1bc5b52f601b7e7c300509ba20765f3992a48,PodSandboxId:407378a1ef10a224c1d92e563c10d196b7bcddcd61e2e55f12dc5eec92a118a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722467792967903064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62a1c59ee05950bd51a41ad4264af099a21feea6709f28d502bb7b5d635ad18,PodSandboxId:c70b70cbb244ad0677eb11a6cd4ed6c5966736afe5a4acdd3ad819ee7cd731d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722467792989925682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b413f3a1447eefd064b9c4597c8b80fa2ec8449862378299d92754979475ae,PodSandboxId:9bf2a3208a6bf73094cb7026cc005746dcea46f08af5d9f2d0257b01a7019228,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722467792803791857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]
string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2442166c3707f6b5f4221023f6212378ba962adfaadd00e33dae9b1294ccbad,PodSandboxId:2735a821fa6f42c302df19f4313e103fd88c6438e8fe66b014fd59a6e3953131,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722467792722594977,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813a6c631fb5df4bac13ef67e733092fb060a5cde1be7b2f853af7c2e9fba44c,PodSandboxId:fbb8cea1e7e757417764e23d55d44e5400fa7a58bba63b59bf36a6aa997a64fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722467788913145592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:287f021b9eafae0b74b6723e97f61b50dee520e3c86c639294474bec248ee983,PodSandboxId:87e81de4f3188edd03f3a7e388af2142b483a6fd3b2655133e3c7635648fc680,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722467788872897441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,},Annotations:map[string]string{io.kubernetes.container.hash: e37353
df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67726bf0c8778c7d1626625b33292a04ba1b7870a56e711e1b0896d186e13542,PodSandboxId:cbf1afd582579aee58284f85977119621137c8c06101fe81032203ff4cb71325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722467788815988610,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600c3b77043c1f6e266fb6bfb7f11a7bbd458517ba9441be75b2cf41373e8d45,PodSandboxId:7889254fb9ddda1b07574262efe16b1ff037335e6fe4e994bdcdab3bef673e2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722467788799317842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0198443caaecdd03f1e10333dfbbee59233bcc806396e24cc89729cb7447b2b,PodSandboxId:dd0e7cd817ff0030b99e69ce0bb7d14eb4cc29cba9f4d7462e9af910c2c73902,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722467459737893713,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3,PodSandboxId:4db4d8ca82c04b4f264ea7cd645dbf80edc596faa387b73eda0b8bcc2bb1de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722467403929448206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7,PodSandboxId:6de822223e246b55811420f29f3cb1b5f11c0a0d0e15643230941cba2aeb75d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722467403848800962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.kubernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6,PodSandboxId:2c2f786f39b7bf49e72f083e5c0d6f9a1bd07174010f4936559ee6571baf04ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722467391805079728,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638,PodSandboxId:87fd03202426c7121a1c4267ab1fdd7459f1d3d6464f39a22e8aee03791d5a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722467389877001759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8,PodSandboxId:41204e620933f09192491f3a87633339ff053f82a41ccb46ab58e7062abd453a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722467369231417507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d,PodSandboxId:028928f1ce584786101582aeeccc466c906a9734020620361c08425eaa0310fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722467369198204764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625,PodSandboxId:c4e3ca8ab9201c594a3c595b574e3d9d2af547452a6473fb9b2d9707f9ecc88d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722467369153392589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,
},Annotations:map[string]string{io.kubernetes.container.hash: e37353df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f,PodSandboxId:c02cee6317a3d5c921a5c186764f3bbd483d401cbd369f7a6d83ff2138fe6eda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722467369174677751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f7656a4-3e43-4faa-b13e-e8e8141ed5a4 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.938422927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7247360-c43c-404b-b748-7f5d8c1efa70 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.938523268Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7247360-c43c-404b-b748-7f5d8c1efa70 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.939835138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d62ddbb7-32a0-4589-a471-eb279aa8336c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.940347707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468032940250561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d62ddbb7-32a0-4589-a471-eb279aa8336c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.940900507Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=288e30a4-3a00-4439-a735-f1cac3386d29 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.940974515Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=288e30a4-3a00-4439-a735-f1cac3386d29 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.941496446Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634238cea87dfd522a4afbc1b6f7c2e0723302042db2ef158be59eabb50aaf4b,PodSandboxId:081e7c7fabc314ca240a9df7a55f6f7d16f644b11d46a4d39f59adc0ad6415a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722467826440622614,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb39a1b6f6d9a920e7e3aef3cf1bc5b52f601b7e7c300509ba20765f3992a48,PodSandboxId:407378a1ef10a224c1d92e563c10d196b7bcddcd61e2e55f12dc5eec92a118a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722467792967903064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62a1c59ee05950bd51a41ad4264af099a21feea6709f28d502bb7b5d635ad18,PodSandboxId:c70b70cbb244ad0677eb11a6cd4ed6c5966736afe5a4acdd3ad819ee7cd731d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722467792989925682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b413f3a1447eefd064b9c4597c8b80fa2ec8449862378299d92754979475ae,PodSandboxId:9bf2a3208a6bf73094cb7026cc005746dcea46f08af5d9f2d0257b01a7019228,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722467792803791857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]
string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2442166c3707f6b5f4221023f6212378ba962adfaadd00e33dae9b1294ccbad,PodSandboxId:2735a821fa6f42c302df19f4313e103fd88c6438e8fe66b014fd59a6e3953131,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722467792722594977,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813a6c631fb5df4bac13ef67e733092fb060a5cde1be7b2f853af7c2e9fba44c,PodSandboxId:fbb8cea1e7e757417764e23d55d44e5400fa7a58bba63b59bf36a6aa997a64fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722467788913145592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:287f021b9eafae0b74b6723e97f61b50dee520e3c86c639294474bec248ee983,PodSandboxId:87e81de4f3188edd03f3a7e388af2142b483a6fd3b2655133e3c7635648fc680,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722467788872897441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,},Annotations:map[string]string{io.kubernetes.container.hash: e37353
df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67726bf0c8778c7d1626625b33292a04ba1b7870a56e711e1b0896d186e13542,PodSandboxId:cbf1afd582579aee58284f85977119621137c8c06101fe81032203ff4cb71325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722467788815988610,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600c3b77043c1f6e266fb6bfb7f11a7bbd458517ba9441be75b2cf41373e8d45,PodSandboxId:7889254fb9ddda1b07574262efe16b1ff037335e6fe4e994bdcdab3bef673e2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722467788799317842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0198443caaecdd03f1e10333dfbbee59233bcc806396e24cc89729cb7447b2b,PodSandboxId:dd0e7cd817ff0030b99e69ce0bb7d14eb4cc29cba9f4d7462e9af910c2c73902,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722467459737893713,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3,PodSandboxId:4db4d8ca82c04b4f264ea7cd645dbf80edc596faa387b73eda0b8bcc2bb1de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722467403929448206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7,PodSandboxId:6de822223e246b55811420f29f3cb1b5f11c0a0d0e15643230941cba2aeb75d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722467403848800962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.kubernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6,PodSandboxId:2c2f786f39b7bf49e72f083e5c0d6f9a1bd07174010f4936559ee6571baf04ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722467391805079728,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638,PodSandboxId:87fd03202426c7121a1c4267ab1fdd7459f1d3d6464f39a22e8aee03791d5a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722467389877001759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8,PodSandboxId:41204e620933f09192491f3a87633339ff053f82a41ccb46ab58e7062abd453a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722467369231417507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d,PodSandboxId:028928f1ce584786101582aeeccc466c906a9734020620361c08425eaa0310fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722467369198204764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625,PodSandboxId:c4e3ca8ab9201c594a3c595b574e3d9d2af547452a6473fb9b2d9707f9ecc88d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722467369153392589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,
},Annotations:map[string]string{io.kubernetes.container.hash: e37353df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f,PodSandboxId:c02cee6317a3d5c921a5c186764f3bbd483d401cbd369f7a6d83ff2138fe6eda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722467369174677751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=288e30a4-3a00-4439-a735-f1cac3386d29 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.982054959Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8384631-9ea9-4fb8-a789-3be78d254ed1 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.982145289Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8384631-9ea9-4fb8-a789-3be78d254ed1 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.983159751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b920eee1-7e79-47b3-b7ab-ec314fb8d135 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.983620139Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468032983597882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b920eee1-7e79-47b3-b7ab-ec314fb8d135 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.984117493Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b69d299-61c6-44a9-9445-76a2e04821ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.984172182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b69d299-61c6-44a9-9445-76a2e04821ce name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:20:32 multinode-615814 crio[2851]: time="2024-07-31 23:20:32.985421480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:634238cea87dfd522a4afbc1b6f7c2e0723302042db2ef158be59eabb50aaf4b,PodSandboxId:081e7c7fabc314ca240a9df7a55f6f7d16f644b11d46a4d39f59adc0ad6415a8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722467826440622614,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ceb39a1b6f6d9a920e7e3aef3cf1bc5b52f601b7e7c300509ba20765f3992a48,PodSandboxId:407378a1ef10a224c1d92e563c10d196b7bcddcd61e2e55f12dc5eec92a118a0,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722467792967903064,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62a1c59ee05950bd51a41ad4264af099a21feea6709f28d502bb7b5d635ad18,PodSandboxId:c70b70cbb244ad0677eb11a6cd4ed6c5966736afe5a4acdd3ad819ee7cd731d2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722467792989925682,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81b413f3a1447eefd064b9c4597c8b80fa2ec8449862378299d92754979475ae,PodSandboxId:9bf2a3208a6bf73094cb7026cc005746dcea46f08af5d9f2d0257b01a7019228,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722467792803791857,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]
string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2442166c3707f6b5f4221023f6212378ba962adfaadd00e33dae9b1294ccbad,PodSandboxId:2735a821fa6f42c302df19f4313e103fd88c6438e8fe66b014fd59a6e3953131,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722467792722594977,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:813a6c631fb5df4bac13ef67e733092fb060a5cde1be7b2f853af7c2e9fba44c,PodSandboxId:fbb8cea1e7e757417764e23d55d44e5400fa7a58bba63b59bf36a6aa997a64fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722467788913145592,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string
]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:287f021b9eafae0b74b6723e97f61b50dee520e3c86c639294474bec248ee983,PodSandboxId:87e81de4f3188edd03f3a7e388af2142b483a6fd3b2655133e3c7635648fc680,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722467788872897441,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,},Annotations:map[string]string{io.kubernetes.container.hash: e37353
df,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67726bf0c8778c7d1626625b33292a04ba1b7870a56e711e1b0896d186e13542,PodSandboxId:cbf1afd582579aee58284f85977119621137c8c06101fe81032203ff4cb71325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722467788815988610,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:600c3b77043c1f6e266fb6bfb7f11a7bbd458517ba9441be75b2cf41373e8d45,PodSandboxId:7889254fb9ddda1b07574262efe16b1ff037335e6fe4e994bdcdab3bef673e2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722467788799317842,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0198443caaecdd03f1e10333dfbbee59233bcc806396e24cc89729cb7447b2b,PodSandboxId:dd0e7cd817ff0030b99e69ce0bb7d14eb4cc29cba9f4d7462e9af910c2c73902,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722467459737893713,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-csqxw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d26553da-0087-42e4-896d-22b1f3a79f1d,},Annotations:map[string]string{io.kubernetes.container.hash: 11feacb0,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3,PodSandboxId:4db4d8ca82c04b4f264ea7cd645dbf80edc596faa387b73eda0b8bcc2bb1de78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722467403929448206,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-qnjmk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37a98d7-a790-4ed5-b579-b1e797f76da4,},Annotations:map[string]string{io.kubernetes.container.hash: fb4c57c7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f0ff197fe4e76a69992dead13c1731d2c9addcf3daef1ffc3a0f9b5a6ce48e7,PodSandboxId:6de822223e246b55811420f29f3cb1b5f11c0a0d0e15643230941cba2aeb75d3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722467403848800962,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: e2d9b360-8119-43cc-b5bb-a90064a3de8b,},Annotations:map[string]string{io.kubernetes.container.hash: 160829b0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6,PodSandboxId:2c2f786f39b7bf49e72f083e5c0d6f9a1bd07174010f4936559ee6571baf04ee,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722467391805079728,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-hmtpd,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: a4a7743e-a0ac-46c9-b041-5c4e527bb96b,},Annotations:map[string]string{io.kubernetes.container.hash: 47f3a2cd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638,PodSandboxId:87fd03202426c7121a1c4267ab1fdd7459f1d3d6464f39a22e8aee03791d5a73,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722467389877001759,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kgb6k,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: e3359694-2a08-4a1b-8a0a-3f2e12dca7cb,},Annotations:map[string]string{io.kubernetes.container.hash: ec9787a7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8,PodSandboxId:41204e620933f09192491f3a87633339ff053f82a41ccb46ab58e7062abd453a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722467369231417507,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34be164481bffb189a8f543f27bf53f3,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d,PodSandboxId:028928f1ce584786101582aeeccc466c906a9734020620361c08425eaa0310fb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722467369198204764,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: b10bc625507898a89217593e914604a7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625,PodSandboxId:c4e3ca8ab9201c594a3c595b574e3d9d2af547452a6473fb9b2d9707f9ecc88d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722467369153392589,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60e23990b964a97f772721b6217fdae,
},Annotations:map[string]string{io.kubernetes.container.hash: e37353df,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f,PodSandboxId:c02cee6317a3d5c921a5c186764f3bbd483d401cbd369f7a6d83ff2138fe6eda,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722467369174677751,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-615814,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 181d79bf2dfbbe750db3b987b0d19492,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b69d299-61c6-44a9-9445-76a2e04821ce name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	634238cea87df       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   081e7c7fabc31       busybox-fc5497c4f-csqxw
	b62a1c59ee059       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   c70b70cbb244a       coredns-7db6d8ff4d-qnjmk
	ceb39a1b6f6d9       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   407378a1ef10a       kindnet-hmtpd
	81b413f3a1447       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   9bf2a3208a6bf       kube-proxy-kgb6k
	d2442166c3707       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   2735a821fa6f4       storage-provisioner
	813a6c631fb5d       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   fbb8cea1e7e75       kube-controller-manager-multinode-615814
	287f021b9eafa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   87e81de4f3188       etcd-multinode-615814
	67726bf0c8778       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   cbf1afd582579       kube-apiserver-multinode-615814
	600c3b77043c1       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   7889254fb9ddd       kube-scheduler-multinode-615814
	b0198443caaec       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   dd0e7cd817ff0       busybox-fc5497c4f-csqxw
	9416bbb6bdebf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   4db4d8ca82c04       coredns-7db6d8ff4d-qnjmk
	1f0ff197fe4e7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   6de822223e246       storage-provisioner
	4dbba9426fe30       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   2c2f786f39b7b       kindnet-hmtpd
	3b0d3881582de       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   87fd03202426c       kube-proxy-kgb6k
	d9f554e5a9b24       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   41204e620933f       kube-scheduler-multinode-615814
	06d82efdc6cac       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   028928f1ce584       kube-controller-manager-multinode-615814
	c72466f6d47ef       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   c02cee6317a3d       kube-apiserver-multinode-615814
	2b8097a110225       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   c4e3ca8ab9201       etcd-multinode-615814
	
	
	==> coredns [9416bbb6bdebf0f1be431fefba547efea3b84beec7e2b0db138cafb4e61c56b3] <==
	[INFO] 10.244.1.2:37763 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001977813s
	[INFO] 10.244.1.2:41883 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000126602s
	[INFO] 10.244.1.2:42644 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077367s
	[INFO] 10.244.1.2:41581 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001418364s
	[INFO] 10.244.1.2:53576 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091426s
	[INFO] 10.244.1.2:38644 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173573s
	[INFO] 10.244.1.2:34235 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091008s
	[INFO] 10.244.0.3:59285 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166399s
	[INFO] 10.244.0.3:53189 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000060775s
	[INFO] 10.244.0.3:58617 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000050319s
	[INFO] 10.244.0.3:56987 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102267s
	[INFO] 10.244.1.2:42379 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018641s
	[INFO] 10.244.1.2:45222 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098353s
	[INFO] 10.244.1.2:34766 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159215s
	[INFO] 10.244.1.2:36921 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00009059s
	[INFO] 10.244.0.3:39447 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167401s
	[INFO] 10.244.0.3:46406 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000130046s
	[INFO] 10.244.0.3:58958 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000123333s
	[INFO] 10.244.0.3:47234 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000100538s
	[INFO] 10.244.1.2:52581 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000208333s
	[INFO] 10.244.1.2:46479 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000098191s
	[INFO] 10.244.1.2:34113 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126232s
	[INFO] 10.244.1.2:48953 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008213s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b62a1c59ee05950bd51a41ad4264af099a21feea6709f28d502bb7b5d635ad18] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50899 - 41257 "HINFO IN 920521463057196509.9166619468331811298. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.066323696s
	
	
	==> describe nodes <==
	Name:               multinode-615814
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-615814
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=multinode-615814
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T23_09_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:09:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-615814
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:20:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:16:31 +0000   Wed, 31 Jul 2024 23:09:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:16:31 +0000   Wed, 31 Jul 2024 23:09:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:16:31 +0000   Wed, 31 Jul 2024 23:09:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:16:31 +0000   Wed, 31 Jul 2024 23:10:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    multinode-615814
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac524db6e09f4202881a55f034e78507
	  System UUID:                ac524db6-e09f-4202-881a-55f034e78507
	  Boot ID:                    fc4f4b6e-22ae-48c7-9dc9-7666b57c3854
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-csqxw                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kube-system                 coredns-7db6d8ff4d-qnjmk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-615814                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-hmtpd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-615814             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-615814    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-kgb6k                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-615814             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  Starting                 11m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x6 over 11m)    kubelet          Node multinode-615814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x6 over 11m)    kubelet          Node multinode-615814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x5 over 11m)    kubelet          Node multinode-615814 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-615814 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-615814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-615814 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-615814 event: Registered Node multinode-615814 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-615814 status is now: NodeReady
	  Normal  Starting                 4m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m5s (x8 over 4m5s)  kubelet          Node multinode-615814 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m5s (x8 over 4m5s)  kubelet          Node multinode-615814 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m5s (x7 over 4m5s)  kubelet          Node multinode-615814 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                node-controller  Node multinode-615814 event: Registered Node multinode-615814 in Controller
	
	
	Name:               multinode-615814-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-615814-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=multinode-615814
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_31T23_17_08_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:17:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-615814-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:18:10 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 31 Jul 2024 23:17:39 +0000   Wed, 31 Jul 2024 23:18:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 31 Jul 2024 23:17:39 +0000   Wed, 31 Jul 2024 23:18:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 31 Jul 2024 23:17:39 +0000   Wed, 31 Jul 2024 23:18:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 31 Jul 2024 23:17:39 +0000   Wed, 31 Jul 2024 23:18:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    multinode-615814-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef74021fba6b4e7588591b3dc5e480db
	  System UUID:                ef74021f-ba6b-4e75-8859-1b3dc5e480db
	  Boot ID:                    76250e4c-9596-4486-8cbe-9b2c54afb1f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-zxdtw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kindnet-flflz              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m57s
	  kube-system                 kube-proxy-swdtj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m20s                  kube-proxy       
	  Normal  Starting                 9m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m58s (x2 over 9m58s)  kubelet          Node multinode-615814-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m58s (x2 over 9m58s)  kubelet          Node multinode-615814-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m58s (x2 over 9m58s)  kubelet          Node multinode-615814-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m38s                  kubelet          Node multinode-615814-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m25s (x2 over 3m25s)  kubelet          Node multinode-615814-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x2 over 3m25s)  kubelet          Node multinode-615814-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x2 over 3m25s)  kubelet          Node multinode-615814-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m6s                   kubelet          Node multinode-615814-m02 status is now: NodeReady
	  Normal  NodeNotReady             99s                    node-controller  Node multinode-615814-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.169142] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.160559] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.292502] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.387556] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +0.057539] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.921147] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.075003] kauditd_printk_skb: 44 callbacks suppressed
	[  +6.519215] systemd-fstab-generator[1273]: Ignoring "noauto" option for root device
	[  +0.092929] kauditd_printk_skb: 43 callbacks suppressed
	[ +13.510325] systemd-fstab-generator[1460]: Ignoring "noauto" option for root device
	[  +0.139899] kauditd_printk_skb: 21 callbacks suppressed
	[Jul31 23:10] kauditd_printk_skb: 60 callbacks suppressed
	[ +54.439278] kauditd_printk_skb: 14 callbacks suppressed
	[Jul31 23:16] systemd-fstab-generator[2771]: Ignoring "noauto" option for root device
	[  +0.173017] systemd-fstab-generator[2783]: Ignoring "noauto" option for root device
	[  +0.197267] systemd-fstab-generator[2797]: Ignoring "noauto" option for root device
	[  +0.151393] systemd-fstab-generator[2809]: Ignoring "noauto" option for root device
	[  +0.307205] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +7.167374] systemd-fstab-generator[2934]: Ignoring "noauto" option for root device
	[  +0.080471] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.013474] systemd-fstab-generator[3059]: Ignoring "noauto" option for root device
	[  +4.673070] kauditd_printk_skb: 74 callbacks suppressed
	[ +11.356148] systemd-fstab-generator[3881]: Ignoring "noauto" option for root device
	[  +0.089932] kauditd_printk_skb: 32 callbacks suppressed
	[Jul31 23:17] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [287f021b9eafae0b74b6723e97f61b50dee520e3c86c639294474bec248ee983] <==
	{"level":"info","ts":"2024-07-31T23:16:29.271591Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:16:29.273717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:16:29.29166Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T23:16:29.302547Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-07-31T23:16:29.30343Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-07-31T23:16:29.315431Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"245a8df1c58de0e1","initial-advertise-peer-urls":["https://192.168.39.129:2380"],"listen-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T23:16:29.315501Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T23:16:30.418104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T23:16:30.418168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T23:16:30.418208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgPreVoteResp from 245a8df1c58de0e1 at term 2"}
	{"level":"info","ts":"2024-07-31T23:16:30.418222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T23:16:30.418228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgVoteResp from 245a8df1c58de0e1 at term 3"}
	{"level":"info","ts":"2024-07-31T23:16:30.418236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T23:16:30.418253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 245a8df1c58de0e1 elected leader 245a8df1c58de0e1 at term 3"}
	{"level":"info","ts":"2024-07-31T23:16:30.42551Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"245a8df1c58de0e1","local-member-attributes":"{Name:multinode-615814 ClientURLs:[https://192.168.39.129:2379]}","request-path":"/0/members/245a8df1c58de0e1/attributes","cluster-id":"a2af9788ad7a361f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T23:16:30.425571Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:16:30.425851Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:16:30.426164Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T23:16:30.426206Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T23:16:30.42746Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T23:16:30.427911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.129:2379"}
	{"level":"info","ts":"2024-07-31T23:17:12.611533Z","caller":"traceutil/trace.go:171","msg":"trace[394771171] linearizableReadLoop","detail":"{readStateIndex:1183; appliedIndex:1182; }","duration":"127.528551ms","start":"2024-07-31T23:17:12.483982Z","end":"2024-07-31T23:17:12.611511Z","steps":["trace[394771171] 'read index received'  (duration: 127.26503ms)","trace[394771171] 'applied index is now lower than readState.Index'  (duration: 262.65µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T23:17:12.611761Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.746049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-615814-m02\" ","response":"range_response_count:1 size:3117"}
	{"level":"info","ts":"2024-07-31T23:17:12.611839Z","caller":"traceutil/trace.go:171","msg":"trace[156676855] range","detail":"{range_begin:/registry/minions/multinode-615814-m02; range_end:; response_count:1; response_revision:1065; }","duration":"127.903914ms","start":"2024-07-31T23:17:12.483924Z","end":"2024-07-31T23:17:12.611828Z","steps":["trace[156676855] 'agreement among raft nodes before linearized reading'  (duration: 127.694397ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:17:12.613764Z","caller":"traceutil/trace.go:171","msg":"trace[593072095] transaction","detail":"{read_only:false; response_revision:1065; number_of_response:1; }","duration":"142.945583ms","start":"2024-07-31T23:17:12.47079Z","end":"2024-07-31T23:17:12.613736Z","steps":["trace[593072095] 'process raft request'  (duration: 140.55982ms)"],"step_count":1}
	
	
	==> etcd [2b8097a110225e93db567db34fd558a15e35264dcedeca867744c7f0f23c9625] <==
	{"level":"info","ts":"2024-07-31T23:09:30.450376Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:09:30.454326Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T23:09:30.45437Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T23:09:30.461106Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.129:2379"}
	{"level":"info","ts":"2024-07-31T23:09:30.464014Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T23:10:36.097887Z","caller":"traceutil/trace.go:171","msg":"trace[661407891] linearizableReadLoop","detail":"{readStateIndex:499; appliedIndex:497; }","duration":"152.542786ms","start":"2024-07-31T23:10:35.94533Z","end":"2024-07-31T23:10:36.097873Z","steps":["trace[661407891] 'read index received'  (duration: 147.460813ms)","trace[661407891] 'applied index is now lower than readState.Index'  (duration: 5.081525ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T23:10:36.098671Z","caller":"traceutil/trace.go:171","msg":"trace[46459963] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"219.287745ms","start":"2024-07-31T23:10:35.879366Z","end":"2024-07-31T23:10:36.098654Z","steps":["trace[46459963] 'process raft request'  (duration: 213.242999ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:10:36.099072Z","caller":"traceutil/trace.go:171","msg":"trace[366714090] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"173.643089ms","start":"2024-07-31T23:10:35.925419Z","end":"2024-07-31T23:10:36.099062Z","steps":["trace[366714090] 'process raft request'  (duration: 172.375954ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-31T23:10:36.099175Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.842759ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-615814-m02\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-07-31T23:10:36.099229Z","caller":"traceutil/trace.go:171","msg":"trace[1850576568] range","detail":"{range_begin:/registry/minions/multinode-615814-m02; range_end:; response_count:1; response_revision:474; }","duration":"153.930297ms","start":"2024-07-31T23:10:35.94529Z","end":"2024-07-31T23:10:36.099221Z","steps":["trace[1850576568] 'agreement among raft nodes before linearized reading'  (duration: 153.847328ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:10:39.302709Z","caller":"traceutil/trace.go:171","msg":"trace[1596659419] linearizableReadLoop","detail":"{readStateIndex:538; appliedIndex:537; }","duration":"111.478194ms","start":"2024-07-31T23:10:39.191209Z","end":"2024-07-31T23:10:39.302687Z","steps":["trace[1596659419] 'read index received'  (duration: 30.9481ms)","trace[1596659419] 'applied index is now lower than readState.Index'  (duration: 80.528957ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-31T23:10:39.302864Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.633334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-615814-m02\" ","response":"range_response_count:1 size:3228"}
	{"level":"info","ts":"2024-07-31T23:10:39.302901Z","caller":"traceutil/trace.go:171","msg":"trace[714340579] range","detail":"{range_begin:/registry/minions/multinode-615814-m02; range_end:; response_count:1; response_revision:506; }","duration":"111.711513ms","start":"2024-07-31T23:10:39.191178Z","end":"2024-07-31T23:10:39.30289Z","steps":["trace[714340579] 'agreement among raft nodes before linearized reading'  (duration: 111.60822ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:11:31.003733Z","caller":"traceutil/trace.go:171","msg":"trace[795890352] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"229.11638ms","start":"2024-07-31T23:11:30.774595Z","end":"2024-07-31T23:11:31.003711Z","steps":["trace[795890352] 'process raft request'  (duration: 216.779027ms)","trace[795890352] 'compare'  (duration: 11.965369ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T23:11:31.004013Z","caller":"traceutil/trace.go:171","msg":"trace[188594877] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"160.762895ms","start":"2024-07-31T23:11:30.843238Z","end":"2024-07-31T23:11:31.004001Z","steps":["trace[188594877] 'process raft request'  (duration: 160.247693ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-31T23:14:46.661101Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-31T23:14:46.661151Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-615814","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"]}
	{"level":"warn","ts":"2024-07-31T23:14:46.662632Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T23:14:46.664863Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T23:14:46.699233Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.129:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-31T23:14:46.699439Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.129:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-31T23:14:46.699581Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"245a8df1c58de0e1","current-leader-member-id":"245a8df1c58de0e1"}
	{"level":"info","ts":"2024-07-31T23:14:46.702793Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-07-31T23:14:46.70312Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2024-07-31T23:14:46.703167Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-615814","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"]}
	
	
	==> kernel <==
	 23:20:33 up 11 min,  0 users,  load average: 0.05, 0.13, 0.08
	Linux multinode-615814 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4dbba9426fe3008581c0b0b307078dc3fe0027fe79680a317742469cc09ab9d6] <==
	I0731 23:14:02.820888       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	I0731 23:14:12.814213       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:14:12.814298       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	I0731 23:14:12.814431       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:14:12.814454       1 main.go:299] handling current node
	I0731 23:14:12.814466       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:14:12.814471       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:14:22.821376       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:14:22.821482       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:14:22.821630       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:14:22.821653       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	I0731 23:14:22.821709       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:14:22.821731       1 main.go:299] handling current node
	I0731 23:14:32.821441       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:14:32.821549       1 main.go:299] handling current node
	I0731 23:14:32.821577       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:14:32.821595       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:14:32.821745       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:14:32.821783       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	I0731 23:14:42.821452       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:14:42.821498       1 main.go:299] handling current node
	I0731 23:14:42.821513       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:14:42.821519       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:14:42.821643       1 main.go:295] Handling node with IPs: map[192.168.39.72:{}]
	I0731 23:14:42.821665       1 main.go:322] Node multinode-615814-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [ceb39a1b6f6d9a920e7e3aef3cf1bc5b52f601b7e7c300509ba20765f3992a48] <==
	I0731 23:19:23.841615       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:19:33.832793       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:19:33.832899       1 main.go:299] handling current node
	I0731 23:19:33.832970       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:19:33.832994       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:19:43.841572       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:19:43.841675       1 main.go:299] handling current node
	I0731 23:19:43.841702       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:19:43.841719       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:19:53.841512       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:19:53.841626       1 main.go:299] handling current node
	I0731 23:19:53.841656       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:19:53.841674       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:20:03.841947       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:20:03.842056       1 main.go:299] handling current node
	I0731 23:20:03.842085       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:20:03.842103       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:20:13.840536       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:20:13.840649       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	I0731 23:20:13.840798       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:20:13.840834       1 main.go:299] handling current node
	I0731 23:20:23.833319       1 main.go:295] Handling node with IPs: map[192.168.39.129:{}]
	I0731 23:20:23.833422       1 main.go:299] handling current node
	I0731 23:20:23.833450       1 main.go:295] Handling node with IPs: map[192.168.39.94:{}]
	I0731 23:20:23.833469       1 main.go:322] Node multinode-615814-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [67726bf0c8778c7d1626625b33292a04ba1b7870a56e711e1b0896d186e13542] <==
	I0731 23:16:31.728580       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 23:16:31.728612       1 policy_source.go:224] refreshing policies
	I0731 23:16:31.746705       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 23:16:31.752117       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 23:16:31.752454       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 23:16:31.752655       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 23:16:31.752845       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 23:16:31.758465       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 23:16:31.760248       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 23:16:31.772499       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 23:16:31.782448       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 23:16:31.783020       1 aggregator.go:165] initial CRD sync complete...
	I0731 23:16:31.783098       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 23:16:31.783124       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 23:16:31.783176       1 cache.go:39] Caches are synced for autoregister controller
	E0731 23:16:31.796414       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0731 23:16:31.814553       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 23:16:32.665481       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 23:16:34.209943       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 23:16:34.339420       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 23:16:34.352426       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 23:16:34.432058       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 23:16:34.439941       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 23:16:44.919085       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 23:16:44.943595       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [c72466f6d47ef6a278f9dc7e5da7c99db2deea5f6fc83b656a629afdc969689f] <==
	W0731 23:14:46.690731       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.690793       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.690863       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.690921       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.690978       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691032       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691084       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691138       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691192       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691245       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691378       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691487       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691547       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691601       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691669       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.691789       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.692184       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.692680       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.692761       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.695566       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:14:46.695673       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0731 23:14:46.695887       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 23:14:46.695993       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0731 23:14:46.696027       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0731 23:14:46.701405       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	
	
	==> kube-controller-manager [06d82efdc6cac498d4b88f7820024dafde163dc41d9dd72a1e19a490b6ad993d] <==
	I0731 23:10:07.143210       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0731 23:10:36.099884       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-615814-m02\" does not exist"
	I0731 23:10:36.116950       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-615814-m02" podCIDRs=["10.244.1.0/24"]
	I0731 23:10:37.149225       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-615814-m02"
	I0731 23:10:55.260610       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:10:57.823950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.143041ms"
	I0731 23:10:57.843692       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.674306ms"
	I0731 23:10:57.843795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.592µs"
	I0731 23:11:00.249077       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.012236ms"
	I0731 23:11:00.249153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.255µs"
	I0731 23:11:00.945211       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.568427ms"
	I0731 23:11:00.945454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.274µs"
	I0731 23:11:31.006553       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:11:31.006729       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-615814-m03\" does not exist"
	I0731 23:11:31.020376       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-615814-m03" podCIDRs=["10.244.2.0/24"]
	I0731 23:11:32.174646       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-615814-m03"
	I0731 23:11:51.511451       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m03"
	I0731 23:12:20.613013       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:12:21.939307       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-615814-m03\" does not exist"
	I0731 23:12:21.939884       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:12:21.948818       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-615814-m03" podCIDRs=["10.244.3.0/24"]
	I0731 23:12:40.970256       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:13:22.230590       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m03"
	I0731 23:13:22.277061       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.394658ms"
	I0731 23:13:22.277244       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.294µs"
	
	
	==> kube-controller-manager [813a6c631fb5df4bac13ef67e733092fb060a5cde1be7b2f853af7c2e9fba44c] <==
	I0731 23:17:08.467673       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-615814-m02\" does not exist"
	I0731 23:17:08.482833       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-615814-m02" podCIDRs=["10.244.1.0/24"]
	I0731 23:17:10.377959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.116µs"
	I0731 23:17:10.388708       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.956µs"
	I0731 23:17:10.421410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.357µs"
	I0731 23:17:10.430567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.376µs"
	I0731 23:17:10.433806       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.72µs"
	I0731 23:17:15.010362       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.882µs"
	I0731 23:17:27.704046       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:17:27.726983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.686µs"
	I0731 23:17:27.745727       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.829µs"
	I0731 23:17:30.678626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.462869ms"
	I0731 23:17:30.679605       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.493µs"
	I0731 23:17:46.306683       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:17:47.218640       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:17:47.219340       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-615814-m03\" does not exist"
	I0731 23:17:47.230556       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-615814-m03" podCIDRs=["10.244.2.0/24"]
	I0731 23:18:06.153183       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:18:11.546644       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-615814-m02"
	I0731 23:18:54.872398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.519361ms"
	I0731 23:18:54.872595       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.151µs"
	I0731 23:19:24.694085       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-l8qmm"
	I0731 23:19:24.718846       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-l8qmm"
	I0731 23:19:24.719092       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-h6lcx"
	I0731 23:19:24.741463       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-h6lcx"
	
	
	==> kube-proxy [3b0d3881582de9e0d548721ac433345900b5808ac14fcb7f6080a0a406cb7638] <==
	I0731 23:09:50.082692       1 server_linux.go:69] "Using iptables proxy"
	I0731 23:09:50.100344       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	I0731 23:09:50.133766       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 23:09:50.133835       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 23:09:50.133854       1 server_linux.go:165] "Using iptables Proxier"
	I0731 23:09:50.137692       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 23:09:50.137986       1 server.go:872] "Version info" version="v1.30.3"
	I0731 23:09:50.138014       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:09:50.140024       1 config.go:192] "Starting service config controller"
	I0731 23:09:50.140349       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 23:09:50.140450       1 config.go:101] "Starting endpoint slice config controller"
	I0731 23:09:50.140470       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 23:09:50.142037       1 config.go:319] "Starting node config controller"
	I0731 23:09:50.142073       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 23:09:50.241477       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 23:09:50.241547       1 shared_informer.go:320] Caches are synced for service config
	I0731 23:09:50.242239       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [81b413f3a1447eefd064b9c4597c8b80fa2ec8449862378299d92754979475ae] <==
	I0731 23:16:33.116616       1 server_linux.go:69] "Using iptables proxy"
	I0731 23:16:33.168065       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	I0731 23:16:33.230429       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 23:16:33.230533       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 23:16:33.230551       1 server_linux.go:165] "Using iptables Proxier"
	I0731 23:16:33.235142       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 23:16:33.235675       1 server.go:872] "Version info" version="v1.30.3"
	I0731 23:16:33.235707       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:16:33.238463       1 config.go:192] "Starting service config controller"
	I0731 23:16:33.238522       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 23:16:33.238551       1 config.go:101] "Starting endpoint slice config controller"
	I0731 23:16:33.238554       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 23:16:33.238964       1 config.go:319] "Starting node config controller"
	I0731 23:16:33.238998       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 23:16:33.338642       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 23:16:33.338684       1 shared_informer.go:320] Caches are synced for service config
	I0731 23:16:33.339062       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [600c3b77043c1f6e266fb6bfb7f11a7bbd458517ba9441be75b2cf41373e8d45] <==
	I0731 23:16:29.870001       1 serving.go:380] Generated self-signed cert in-memory
	W0731 23:16:31.689802       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 23:16:31.689841       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 23:16:31.689854       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 23:16:31.689860       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 23:16:31.732037       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0731 23:16:31.732103       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:16:31.736119       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 23:16:31.736193       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 23:16:31.736997       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 23:16:31.739386       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0731 23:16:31.836416       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d9f554e5a9b2449955b5c822ecf1005ca98c034fc012d556830775ff46b489d8] <==
	E0731 23:09:31.881375       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 23:09:31.881404       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 23:09:31.881430       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 23:09:32.773909       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 23:09:32.773954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 23:09:32.840077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 23:09:32.840141       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 23:09:32.847139       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0731 23:09:32.847191       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0731 23:09:32.869147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 23:09:32.869356       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 23:09:32.931687       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 23:09:32.932359       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 23:09:32.949325       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 23:09:32.949428       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 23:09:33.011897       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 23:09:33.012023       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 23:09:33.061385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 23:09:33.061523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 23:09:33.252994       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 23:09:33.253095       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 23:09:33.311360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 23:09:33.311461       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0731 23:09:35.466831       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 23:14:46.662012       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.285241    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4a7743e-a0ac-46c9-b041-5c4e527bb96b-lib-modules\") pod \"kindnet-hmtpd\" (UID: \"a4a7743e-a0ac-46c9-b041-5c4e527bb96b\") " pod="kube-system/kindnet-hmtpd"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.285296    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3359694-2a08-4a1b-8a0a-3f2e12dca7cb-xtables-lock\") pod \"kube-proxy-kgb6k\" (UID: \"e3359694-2a08-4a1b-8a0a-3f2e12dca7cb\") " pod="kube-system/kube-proxy-kgb6k"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: I0731 23:16:32.285312    3066 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3359694-2a08-4a1b-8a0a-3f2e12dca7cb-lib-modules\") pod \"kube-proxy-kgb6k\" (UID: \"e3359694-2a08-4a1b-8a0a-3f2e12dca7cb\") " pod="kube-system/kube-proxy-kgb6k"
	Jul 31 23:16:32 multinode-615814 kubelet[3066]: E0731 23:16:32.337641    3066 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-615814\" already exists" pod="kube-system/kube-apiserver-multinode-615814"
	Jul 31 23:16:38 multinode-615814 kubelet[3066]: I0731 23:16:38.942413    3066 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 31 23:17:28 multinode-615814 kubelet[3066]: E0731 23:17:28.279674    3066 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:17:28 multinode-615814 kubelet[3066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:17:28 multinode-615814 kubelet[3066]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:17:28 multinode-615814 kubelet[3066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:17:28 multinode-615814 kubelet[3066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 23:18:28 multinode-615814 kubelet[3066]: E0731 23:18:28.279987    3066 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:18:28 multinode-615814 kubelet[3066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:18:28 multinode-615814 kubelet[3066]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:18:28 multinode-615814 kubelet[3066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:18:28 multinode-615814 kubelet[3066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 23:19:28 multinode-615814 kubelet[3066]: E0731 23:19:28.280574    3066 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:19:28 multinode-615814 kubelet[3066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:19:28 multinode-615814 kubelet[3066]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:19:28 multinode-615814 kubelet[3066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:19:28 multinode-615814 kubelet[3066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 31 23:20:28 multinode-615814 kubelet[3066]: E0731 23:20:28.281101    3066 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 31 23:20:28 multinode-615814 kubelet[3066]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 31 23:20:28 multinode-615814 kubelet[3066]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 31 23:20:28 multinode-615814 kubelet[3066]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 31 23:20:28 multinode-615814 kubelet[3066]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0731 23:20:32.559325 1214589 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19312-1172186/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-615814 -n multinode-615814
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-615814 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.50s)

                                                
                                    
TestPreload (173.14s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-931367 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0731 23:24:53.720767 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-931367 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m39.501614302s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-931367 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-931367 image pull gcr.io/k8s-minikube/busybox: (1.790904186s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-931367
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-931367: (7.325197122s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-931367 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-931367 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.331047771s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-931367 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-07-31 23:27:20.993494862 +0000 UTC m=+5469.863386233
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-931367 -n test-preload-931367
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-931367 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-931367 logs -n 25: (1.095468721s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n multinode-615814 sudo cat                                       | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | /home/docker/cp-test_multinode-615814-m03_multinode-615814.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-615814 cp multinode-615814-m03:/home/docker/cp-test.txt                       | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814-m02:/home/docker/cp-test_multinode-615814-m03_multinode-615814-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n                                                                 | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | multinode-615814-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-615814 ssh -n multinode-615814-m02 sudo cat                                   | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | /home/docker/cp-test_multinode-615814-m03_multinode-615814-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-615814 node stop m03                                                          | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	| node    | multinode-615814 node start                                                             | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC | 31 Jul 24 23:12 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-615814                                                                | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC |                     |
	| stop    | -p multinode-615814                                                                     | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:12 UTC |                     |
	| start   | -p multinode-615814                                                                     | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:14 UTC | 31 Jul 24 23:18 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-615814                                                                | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:18 UTC |                     |
	| node    | multinode-615814 node delete                                                            | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:18 UTC | 31 Jul 24 23:18 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-615814 stop                                                                   | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:18 UTC |                     |
	| start   | -p multinode-615814                                                                     | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:20 UTC | 31 Jul 24 23:23 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-615814                                                                | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:23 UTC |                     |
	| start   | -p multinode-615814-m02                                                                 | multinode-615814-m02 | jenkins | v1.33.1 | 31 Jul 24 23:23 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-615814-m03                                                                 | multinode-615814-m03 | jenkins | v1.33.1 | 31 Jul 24 23:23 UTC | 31 Jul 24 23:24 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-615814                                                                 | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:24 UTC |                     |
	| delete  | -p multinode-615814-m03                                                                 | multinode-615814-m03 | jenkins | v1.33.1 | 31 Jul 24 23:24 UTC | 31 Jul 24 23:24 UTC |
	| delete  | -p multinode-615814                                                                     | multinode-615814     | jenkins | v1.33.1 | 31 Jul 24 23:24 UTC | 31 Jul 24 23:24 UTC |
	| start   | -p test-preload-931367                                                                  | test-preload-931367  | jenkins | v1.33.1 | 31 Jul 24 23:24 UTC | 31 Jul 24 23:26 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-931367 image pull                                                          | test-preload-931367  | jenkins | v1.33.1 | 31 Jul 24 23:26 UTC | 31 Jul 24 23:26 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-931367                                                                  | test-preload-931367  | jenkins | v1.33.1 | 31 Jul 24 23:26 UTC | 31 Jul 24 23:26 UTC |
	| start   | -p test-preload-931367                                                                  | test-preload-931367  | jenkins | v1.33.1 | 31 Jul 24 23:26 UTC | 31 Jul 24 23:27 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-931367 image list                                                          | test-preload-931367  | jenkins | v1.33.1 | 31 Jul 24 23:27 UTC | 31 Jul 24 23:27 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 23:26:19
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 23:26:19.486501 1217002 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:26:19.486639 1217002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:26:19.486648 1217002 out.go:304] Setting ErrFile to fd 2...
	I0731 23:26:19.486653 1217002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:26:19.486826 1217002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 23:26:19.487434 1217002 out.go:298] Setting JSON to false
	I0731 23:26:19.488596 1217002 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":25730,"bootTime":1722442649,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 23:26:19.488675 1217002 start.go:139] virtualization: kvm guest
	I0731 23:26:19.490842 1217002 out.go:177] * [test-preload-931367] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 23:26:19.492382 1217002 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 23:26:19.492401 1217002 notify.go:220] Checking for updates...
	I0731 23:26:19.494780 1217002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:26:19.496069 1217002 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:26:19.497458 1217002 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:26:19.498681 1217002 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 23:26:19.499943 1217002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:26:19.501528 1217002 config.go:182] Loaded profile config "test-preload-931367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 23:26:19.501978 1217002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:26:19.502038 1217002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:26:19.517612 1217002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40199
	I0731 23:26:19.518100 1217002 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:26:19.518688 1217002 main.go:141] libmachine: Using API Version  1
	I0731 23:26:19.518723 1217002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:26:19.519155 1217002 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:26:19.519430 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	I0731 23:26:19.521131 1217002 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 23:26:19.522559 1217002 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:26:19.523046 1217002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:26:19.523112 1217002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:26:19.538645 1217002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45099
	I0731 23:26:19.539159 1217002 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:26:19.539664 1217002 main.go:141] libmachine: Using API Version  1
	I0731 23:26:19.539695 1217002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:26:19.540069 1217002 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:26:19.540358 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	I0731 23:26:19.578101 1217002 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 23:26:19.579439 1217002 start.go:297] selected driver: kvm2
	I0731 23:26:19.579458 1217002 start.go:901] validating driver "kvm2" against &{Name:test-preload-931367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-931367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:26:19.579564 1217002 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:26:19.580403 1217002 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:26:19.580495 1217002 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 23:26:19.596666 1217002 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 23:26:19.597048 1217002 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:26:19.597110 1217002 cni.go:84] Creating CNI manager for ""
	I0731 23:26:19.597119 1217002 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 23:26:19.597171 1217002 start.go:340] cluster config:
	{Name:test-preload-931367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-931367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:26:19.597278 1217002 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:26:19.599127 1217002 out.go:177] * Starting "test-preload-931367" primary control-plane node in "test-preload-931367" cluster
	I0731 23:26:19.600310 1217002 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 23:26:19.623092 1217002 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0731 23:26:19.623127 1217002 cache.go:56] Caching tarball of preloaded images
	I0731 23:26:19.623310 1217002 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 23:26:19.624876 1217002 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0731 23:26:19.625997 1217002 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 23:26:19.651149 1217002 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0731 23:26:22.548623 1217002 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 23:26:22.548753 1217002 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0731 23:26:23.429405 1217002 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0731 23:26:23.429573 1217002 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/config.json ...
	I0731 23:26:23.429839 1217002 start.go:360] acquireMachinesLock for test-preload-931367: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:26:23.429922 1217002 start.go:364] duration metric: took 58.127µs to acquireMachinesLock for "test-preload-931367"
	I0731 23:26:23.429945 1217002 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:26:23.429953 1217002 fix.go:54] fixHost starting: 
	I0731 23:26:23.430300 1217002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:26:23.430333 1217002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:26:23.445913 1217002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41417
	I0731 23:26:23.446440 1217002 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:26:23.446962 1217002 main.go:141] libmachine: Using API Version  1
	I0731 23:26:23.446981 1217002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:26:23.447355 1217002 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:26:23.447621 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	I0731 23:26:23.447785 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetState
	I0731 23:26:23.450017 1217002 fix.go:112] recreateIfNeeded on test-preload-931367: state=Stopped err=<nil>
	I0731 23:26:23.450051 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	W0731 23:26:23.450284 1217002 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:26:23.452300 1217002 out.go:177] * Restarting existing kvm2 VM for "test-preload-931367" ...
	I0731 23:26:23.453438 1217002 main.go:141] libmachine: (test-preload-931367) Calling .Start
	I0731 23:26:23.453716 1217002 main.go:141] libmachine: (test-preload-931367) Ensuring networks are active...
	I0731 23:26:23.454751 1217002 main.go:141] libmachine: (test-preload-931367) Ensuring network default is active
	I0731 23:26:23.455071 1217002 main.go:141] libmachine: (test-preload-931367) Ensuring network mk-test-preload-931367 is active
	I0731 23:26:23.455462 1217002 main.go:141] libmachine: (test-preload-931367) Getting domain xml...
	I0731 23:26:23.456249 1217002 main.go:141] libmachine: (test-preload-931367) Creating domain...
	I0731 23:26:24.708223 1217002 main.go:141] libmachine: (test-preload-931367) Waiting to get IP...
	I0731 23:26:24.709106 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:24.709518 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:24.709599 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:24.709497 1217037 retry.go:31] will retry after 295.881843ms: waiting for machine to come up
	I0731 23:26:25.007352 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:25.007872 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:25.007901 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:25.007828 1217037 retry.go:31] will retry after 284.171501ms: waiting for machine to come up
	I0731 23:26:25.293327 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:25.293885 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:25.293914 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:25.293837 1217037 retry.go:31] will retry after 432.820739ms: waiting for machine to come up
	I0731 23:26:25.728710 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:25.729148 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:25.729178 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:25.729107 1217037 retry.go:31] will retry after 531.621755ms: waiting for machine to come up
	I0731 23:26:26.261838 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:26.262336 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:26.262368 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:26.262287 1217037 retry.go:31] will retry after 562.123352ms: waiting for machine to come up
	I0731 23:26:26.826158 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:26.826577 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:26.826611 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:26.826517 1217037 retry.go:31] will retry after 652.847167ms: waiting for machine to come up
	I0731 23:26:27.481567 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:27.482079 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:27.482104 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:27.482047 1217037 retry.go:31] will retry after 855.287603ms: waiting for machine to come up
	I0731 23:26:28.339539 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:28.339891 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:28.339918 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:28.339822 1217037 retry.go:31] will retry after 1.017669889s: waiting for machine to come up
	I0731 23:26:29.359588 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:29.360114 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:29.360147 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:29.360066 1217037 retry.go:31] will retry after 1.328635592s: waiting for machine to come up
	I0731 23:26:30.690718 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:30.691263 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:30.691298 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:30.691204 1217037 retry.go:31] will retry after 1.771645057s: waiting for machine to come up
	I0731 23:26:32.464176 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:32.464698 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:32.464732 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:32.464632 1217037 retry.go:31] will retry after 2.300281801s: waiting for machine to come up
	I0731 23:26:34.766821 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:34.767246 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:34.767274 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:34.767205 1217037 retry.go:31] will retry after 2.88690907s: waiting for machine to come up
	I0731 23:26:37.657375 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:37.657856 1217002 main.go:141] libmachine: (test-preload-931367) DBG | unable to find current IP address of domain test-preload-931367 in network mk-test-preload-931367
	I0731 23:26:37.657892 1217002 main.go:141] libmachine: (test-preload-931367) DBG | I0731 23:26:37.657827 1217037 retry.go:31] will retry after 4.242591776s: waiting for machine to come up
	I0731 23:26:41.901748 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:41.902391 1217002 main.go:141] libmachine: (test-preload-931367) Found IP for machine: 192.168.39.221
	I0731 23:26:41.902418 1217002 main.go:141] libmachine: (test-preload-931367) Reserving static IP address...
	I0731 23:26:41.902435 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has current primary IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:41.902911 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "test-preload-931367", mac: "52:54:00:d2:fd:84", ip: "192.168.39.221"} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:41.902941 1217002 main.go:141] libmachine: (test-preload-931367) DBG | skip adding static IP to network mk-test-preload-931367 - found existing host DHCP lease matching {name: "test-preload-931367", mac: "52:54:00:d2:fd:84", ip: "192.168.39.221"}
	I0731 23:26:41.902954 1217002 main.go:141] libmachine: (test-preload-931367) Reserved static IP address: 192.168.39.221
	I0731 23:26:41.902971 1217002 main.go:141] libmachine: (test-preload-931367) Waiting for SSH to be available...
	I0731 23:26:41.902983 1217002 main.go:141] libmachine: (test-preload-931367) DBG | Getting to WaitForSSH function...
	I0731 23:26:41.905756 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:41.906187 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:41.906219 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:41.906341 1217002 main.go:141] libmachine: (test-preload-931367) DBG | Using SSH client type: external
	I0731 23:26:41.906406 1217002 main.go:141] libmachine: (test-preload-931367) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/test-preload-931367/id_rsa (-rw-------)
	I0731 23:26:41.906442 1217002 main.go:141] libmachine: (test-preload-931367) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.221 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/test-preload-931367/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 23:26:41.906463 1217002 main.go:141] libmachine: (test-preload-931367) DBG | About to run SSH command:
	I0731 23:26:41.906484 1217002 main.go:141] libmachine: (test-preload-931367) DBG | exit 0
	I0731 23:26:42.032198 1217002 main.go:141] libmachine: (test-preload-931367) DBG | SSH cmd err, output: <nil>: 
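	
	Editor's sketch: the run of "will retry after ..." lines above shows a grow-the-delay retry loop while the restarted VM waits for a DHCP lease and then for SSH. The minimal Go sketch below illustrates only that pattern; waitForIP and lookupIP are hypothetical names, not minikube's actual retry.go API, and lookupIP never succeeds here so the example simply times out.
	
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	var errNoLease = errors.New("no DHCP lease yet")
	
	// lookupIP stands in for querying libvirt's DHCP leases; in this sketch it never succeeds.
	func lookupIP() (string, error) { return "", errNoLease }
	
	// waitForIP retries lookupIP with a growing delay until it succeeds or the timeout expires.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			time.Sleep(delay)
			// Grow the delay up to a cap, roughly like the log's 0.3s ... 4.2s steps.
			if delay < 5*time.Second {
				delay = delay * 3 / 2
			}
		}
		return "", fmt.Errorf("timed out after %s waiting for machine to come up", timeout)
	}
	
	func main() {
		if _, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		}
	}
	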
	I0731 23:26:42.032582 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetConfigRaw
	I0731 23:26:42.033241 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetIP
	I0731 23:26:42.035979 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.036384 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:42.036416 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.036675 1217002 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/config.json ...
	I0731 23:26:42.036914 1217002 machine.go:94] provisionDockerMachine start ...
	I0731 23:26:42.036938 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	I0731 23:26:42.037189 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:26:42.039855 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.040252 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:42.040274 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.040466 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHPort
	I0731 23:26:42.040667 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:42.040831 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:42.040950 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHUsername
	I0731 23:26:42.041150 1217002 main.go:141] libmachine: Using SSH client type: native
	I0731 23:26:42.041375 1217002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0731 23:26:42.041388 1217002 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:26:42.148531 1217002 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0731 23:26:42.148561 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetMachineName
	I0731 23:26:42.148892 1217002 buildroot.go:166] provisioning hostname "test-preload-931367"
	I0731 23:26:42.148925 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetMachineName
	I0731 23:26:42.149128 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:26:42.151837 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.152191 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:42.152216 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.152457 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHPort
	I0731 23:26:42.152702 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:42.152889 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:42.153122 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHUsername
	I0731 23:26:42.153272 1217002 main.go:141] libmachine: Using SSH client type: native
	I0731 23:26:42.153452 1217002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0731 23:26:42.153466 1217002 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-931367 && echo "test-preload-931367" | sudo tee /etc/hostname
	I0731 23:26:42.276019 1217002 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-931367
	
	I0731 23:26:42.276060 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:26:42.278861 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.279203 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:42.279240 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.279479 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHPort
	I0731 23:26:42.279698 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:42.279890 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:42.280026 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHUsername
	I0731 23:26:42.280188 1217002 main.go:141] libmachine: Using SSH client type: native
	I0731 23:26:42.280361 1217002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0731 23:26:42.280378 1217002 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-931367' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-931367/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-931367' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:26:42.393046 1217002 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:26:42.393078 1217002 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 23:26:42.393110 1217002 buildroot.go:174] setting up certificates
	I0731 23:26:42.393125 1217002 provision.go:84] configureAuth start
	I0731 23:26:42.393138 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetMachineName
	I0731 23:26:42.393489 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetIP
	I0731 23:26:42.396433 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.396854 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:42.396906 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.397069 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:26:42.399499 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.399872 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:42.399908 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.400084 1217002 provision.go:143] copyHostCerts
	I0731 23:26:42.400169 1217002 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 23:26:42.400181 1217002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 23:26:42.400251 1217002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 23:26:42.400351 1217002 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 23:26:42.400362 1217002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 23:26:42.400396 1217002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 23:26:42.400477 1217002 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 23:26:42.400486 1217002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 23:26:42.400513 1217002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 23:26:42.400577 1217002 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.test-preload-931367 san=[127.0.0.1 192.168.39.221 localhost minikube test-preload-931367]
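	
	Editor's sketch: the "generating server cert ... san=[...]" step above issues a TLS server certificate whose SANs cover the loopback address, the VM IP, and the machine names. The Go sketch below is a rough illustration of producing such a certificate; it self-signs for brevity, whereas the log shows minikube signing with the ca.pem/ca-key.pem pair, and every value here is copied from the log or is an illustrative placeholder.
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-931367"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the cluster config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the san=[...] list in the log line above.
			DNSNames:    []string{"localhost", "minikube", "test-preload-931367"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.221")},
		}
		// Self-signed in this sketch; minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	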
	I0731 23:26:42.956968 1217002 provision.go:177] copyRemoteCerts
	I0731 23:26:42.957046 1217002 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:26:42.957085 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:26:42.960355 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.960784 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:42.960808 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:42.960981 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHPort
	I0731 23:26:42.961187 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:42.961335 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHUsername
	I0731 23:26:42.961434 1217002 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/test-preload-931367/id_rsa Username:docker}
	I0731 23:26:43.042484 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 23:26:43.068149 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0731 23:26:43.093250 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 23:26:43.118366 1217002 provision.go:87] duration metric: took 725.224017ms to configureAuth
	I0731 23:26:43.118410 1217002 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:26:43.118611 1217002 config.go:182] Loaded profile config "test-preload-931367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 23:26:43.118694 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:26:43.121823 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.122304 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:43.122337 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.122451 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHPort
	I0731 23:26:43.122683 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:43.122875 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:43.123005 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHUsername
	I0731 23:26:43.123156 1217002 main.go:141] libmachine: Using SSH client type: native
	I0731 23:26:43.123373 1217002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0731 23:26:43.123393 1217002 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 23:26:43.387688 1217002 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 23:26:43.387725 1217002 machine.go:97] duration metric: took 1.350797591s to provisionDockerMachine
	I0731 23:26:43.387744 1217002 start.go:293] postStartSetup for "test-preload-931367" (driver="kvm2")
	I0731 23:26:43.387774 1217002 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:26:43.387803 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	I0731 23:26:43.388169 1217002 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:26:43.388219 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:26:43.391010 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.391314 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:43.391348 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.391541 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHPort
	I0731 23:26:43.391761 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:43.391959 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHUsername
	I0731 23:26:43.392168 1217002 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/test-preload-931367/id_rsa Username:docker}
	I0731 23:26:43.474877 1217002 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:26:43.479235 1217002 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 23:26:43.479267 1217002 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 23:26:43.479348 1217002 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 23:26:43.479419 1217002 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 23:26:43.479517 1217002 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:26:43.489317 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:26:43.514037 1217002 start.go:296] duration metric: took 126.275741ms for postStartSetup
	I0731 23:26:43.514084 1217002 fix.go:56] duration metric: took 20.084131815s for fixHost
	I0731 23:26:43.514107 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:26:43.516852 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.517134 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:43.517165 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.517308 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHPort
	I0731 23:26:43.517545 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:43.517749 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:43.517903 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHUsername
	I0731 23:26:43.518064 1217002 main.go:141] libmachine: Using SSH client type: native
	I0731 23:26:43.518241 1217002 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I0731 23:26:43.518252 1217002 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 23:26:43.624877 1217002 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722468403.599028752
	
	I0731 23:26:43.624908 1217002 fix.go:216] guest clock: 1722468403.599028752
	I0731 23:26:43.624919 1217002 fix.go:229] Guest: 2024-07-31 23:26:43.599028752 +0000 UTC Remote: 2024-07-31 23:26:43.514087954 +0000 UTC m=+24.064466291 (delta=84.940798ms)
	I0731 23:26:43.624942 1217002 fix.go:200] guest clock delta is within tolerance: 84.940798ms
	I0731 23:26:43.624950 1217002 start.go:83] releasing machines lock for "test-preload-931367", held for 20.195012904s
	I0731 23:26:43.624995 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	I0731 23:26:43.625335 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetIP
	I0731 23:26:43.628226 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.628553 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:43.628596 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.628769 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	I0731 23:26:43.629356 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	I0731 23:26:43.629521 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	I0731 23:26:43.629604 1217002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 23:26:43.629648 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:26:43.629773 1217002 ssh_runner.go:195] Run: cat /version.json
	I0731 23:26:43.629794 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:26:43.632461 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.632493 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.632833 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:43.632864 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.632900 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:43.632917 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:43.633033 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHPort
	I0731 23:26:43.633152 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHPort
	I0731 23:26:43.633244 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:43.633339 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:26:43.633417 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHUsername
	I0731 23:26:43.633487 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHUsername
	I0731 23:26:43.633575 1217002 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/test-preload-931367/id_rsa Username:docker}
	I0731 23:26:43.633625 1217002 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/test-preload-931367/id_rsa Username:docker}
	I0731 23:26:43.741700 1217002 ssh_runner.go:195] Run: systemctl --version
	I0731 23:26:43.747714 1217002 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 23:26:43.894626 1217002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 23:26:43.900544 1217002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:26:43.900623 1217002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:26:43.916820 1217002 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 23:26:43.916858 1217002 start.go:495] detecting cgroup driver to use...
	I0731 23:26:43.916936 1217002 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:26:43.933214 1217002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:26:43.947607 1217002 docker.go:217] disabling cri-docker service (if available) ...
	I0731 23:26:43.947672 1217002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 23:26:43.962254 1217002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 23:26:43.977237 1217002 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 23:26:44.094937 1217002 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 23:26:44.245475 1217002 docker.go:233] disabling docker service ...
	I0731 23:26:44.245575 1217002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 23:26:44.260382 1217002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 23:26:44.274505 1217002 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 23:26:44.404761 1217002 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 23:26:44.532250 1217002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
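Before CRI-O is configured, the cri-dockerd and docker units are stopped, disabled, and masked so they cannot claim the container runtime socket. A condensed sketch of the same sequence (it only mirrors the systemctl calls above; the trailing || true keeps the sketch going when a unit is absent on the guest):

    # Keep dockershim/docker from competing with CRI-O for the runtime socket
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service || true
    sudo systemctl disable cri-docker.socket docker.socket || true
    sudo systemctl mask cri-docker.service docker.service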
	I0731 23:26:44.546559 1217002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:26:44.565825 1217002 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0731 23:26:44.565890 1217002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:26:44.576879 1217002 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 23:26:44.576963 1217002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:26:44.588236 1217002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:26:44.599064 1217002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:26:44.610064 1217002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:26:44.621511 1217002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:26:44.632729 1217002 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:26:44.650665 1217002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
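The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the pause:3.7 image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unrestricted unprivileged port range. A quick way to spot-check the drop-in before CRI-O is restarted:

    # Verify the CRI-O drop-in carries the expected settings after the edits above
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf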
	I0731 23:26:44.661727 1217002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:26:44.671714 1217002 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 23:26:44.671857 1217002 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 23:26:44.684784 1217002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
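The bridge-nf sysctl is missing until the br_netfilter module is loaded, which is why the sysctl probe above exits with status 255; minikube then loads the module and enables IP forwarding. The same preparation by hand:

    # Make bridged pod traffic visible to iptables and allow forwarding
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo sysctl net.bridge.bridge-nf-call-iptables    # typically reports = 1 once the module is loaded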
	I0731 23:26:44.695284 1217002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:26:44.806614 1217002 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 23:26:44.942950 1217002 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 23:26:44.943035 1217002 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 23:26:44.947986 1217002 start.go:563] Will wait 60s for crictl version
	I0731 23:26:44.948057 1217002 ssh_runner.go:195] Run: which crictl
	I0731 23:26:44.952032 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:26:44.992823 1217002 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 23:26:44.992930 1217002 ssh_runner.go:195] Run: crio --version
	I0731 23:26:45.025346 1217002 ssh_runner.go:195] Run: crio --version
	I0731 23:26:45.058389 1217002 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0731 23:26:45.059837 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetIP
	I0731 23:26:45.062934 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:45.063273 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:26:45.063303 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:26:45.063535 1217002 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 23:26:45.067755 1217002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:26:45.080541 1217002 kubeadm.go:883] updating cluster {Name:test-preload-931367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-931367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 23:26:45.080668 1217002 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0731 23:26:45.080710 1217002 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:26:45.116656 1217002 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0731 23:26:45.116743 1217002 ssh_runner.go:195] Run: which lz4
	I0731 23:26:45.120657 1217002 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 23:26:45.125014 1217002 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 23:26:45.125054 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0731 23:26:46.700964 1217002 crio.go:462] duration metric: took 1.580342765s to copy over tarball
	I0731 23:26:46.701074 1217002 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 23:26:49.228441 1217002 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.527333742s)
	I0731 23:26:49.228474 1217002 crio.go:469] duration metric: took 2.527471636s to extract the tarball
	I0731 23:26:49.228487 1217002 ssh_runner.go:146] rm: /preloaded.tar.lz4
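Because the guest has no preloaded images yet, the ~459 MB preload tarball is copied over SSH, unpacked into /var, and then deleted. A hand-run sketch of the same transfer, with paths shortened to ~/.minikube and the key and IP taken from this run's log:

    # Stage the preload tarball in the guest and extract it into /var (container storage lives there)
    PRELOAD=preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
    scp -i ~/.minikube/machines/test-preload-931367/id_rsa \
        ~/.minikube/cache/preloaded-tarball/$PRELOAD docker@192.168.39.221:/tmp/$PRELOAD
    ssh -i ~/.minikube/machines/test-preload-931367/id_rsa docker@192.168.39.221 \
        "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/$PRELOAD && rm /tmp/$PRELOAD"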
	I0731 23:26:49.269112 1217002 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:26:49.312246 1217002 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0731 23:26:49.312276 1217002 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 23:26:49.312339 1217002 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:26:49.312364 1217002 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 23:26:49.312387 1217002 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 23:26:49.312404 1217002 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0731 23:26:49.312417 1217002 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:26:49.312448 1217002 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 23:26:49.312497 1217002 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 23:26:49.312720 1217002 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 23:26:49.314032 1217002 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 23:26:49.314144 1217002 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 23:26:49.314038 1217002 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:26:49.314201 1217002 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 23:26:49.314038 1217002 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:26:49.314037 1217002 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 23:26:49.314042 1217002 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 23:26:49.314062 1217002 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 23:26:49.456775 1217002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 23:26:49.480614 1217002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 23:26:49.486105 1217002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 23:26:49.510742 1217002 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0731 23:26:49.510807 1217002 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 23:26:49.510876 1217002 ssh_runner.go:195] Run: which crictl
	I0731 23:26:49.513057 1217002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0731 23:26:49.525285 1217002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0731 23:26:49.533332 1217002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0731 23:26:49.538021 1217002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:26:49.575224 1217002 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0731 23:26:49.575278 1217002 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0731 23:26:49.575308 1217002 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0731 23:26:49.575346 1217002 ssh_runner.go:195] Run: which crictl
	I0731 23:26:49.575355 1217002 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 23:26:49.575390 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 23:26:49.575396 1217002 ssh_runner.go:195] Run: which crictl
	I0731 23:26:49.651561 1217002 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0731 23:26:49.651618 1217002 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0731 23:26:49.651639 1217002 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0731 23:26:49.651672 1217002 ssh_runner.go:195] Run: which crictl
	I0731 23:26:49.651680 1217002 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0731 23:26:49.651737 1217002 ssh_runner.go:195] Run: which crictl
	I0731 23:26:49.670824 1217002 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0731 23:26:49.670876 1217002 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0731 23:26:49.670931 1217002 ssh_runner.go:195] Run: which crictl
	I0731 23:26:49.673640 1217002 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0731 23:26:49.673705 1217002 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:26:49.673761 1217002 ssh_runner.go:195] Run: which crictl
	I0731 23:26:49.684813 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 23:26:49.684902 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0731 23:26:49.684944 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0731 23:26:49.685005 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0731 23:26:49.685045 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0731 23:26:49.685074 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0731 23:26:49.685115 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:26:49.805397 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0731 23:26:49.846911 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0731 23:26:49.846959 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0731 23:26:49.846920 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:26:49.847002 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0731 23:26:49.847094 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0731 23:26:49.847131 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0731 23:26:49.907270 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0731 23:26:49.979076 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:26:50.018910 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0731 23:26:50.019004 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0731 23:26:50.019079 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0731 23:26:50.019149 1217002 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0731 23:26:50.019081 1217002 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0731 23:26:50.019274 1217002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 23:26:50.054820 1217002 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0731 23:26:50.054930 1217002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 23:26:50.079440 1217002 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 23:26:50.079568 1217002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 23:26:50.138964 1217002 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0731 23:26:50.138979 1217002 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0731 23:26:50.139036 1217002 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0731 23:26:50.139078 1217002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 23:26:50.139123 1217002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 23:26:50.139079 1217002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 23:26:50.138964 1217002 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0731 23:26:50.139183 1217002 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0731 23:26:50.139200 1217002 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 23:26:50.139211 1217002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 23:26:50.139228 1217002 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0731 23:26:50.139143 1217002 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0731 23:26:50.139244 1217002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0731 23:26:50.152458 1217002 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0731 23:26:50.152517 1217002 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0731 23:26:50.152585 1217002 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0731 23:26:50.152635 1217002 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0731 23:26:50.198311 1217002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:26:52.804178 1217002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.664903312s)
	I0731 23:26:52.804228 1217002 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0731 23:26:52.804257 1217002 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.605903611s)
	I0731 23:26:52.804264 1217002 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 23:26:52.804367 1217002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0731 23:26:53.146290 1217002 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 23:26:53.146345 1217002 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 23:26:53.146401 1217002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0731 23:26:53.889477 1217002 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0731 23:26:53.889525 1217002 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 23:26:53.889595 1217002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0731 23:26:54.339212 1217002 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0731 23:26:54.339283 1217002 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 23:26:54.339363 1217002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0731 23:26:56.496342 1217002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.156944764s)
	I0731 23:26:56.496377 1217002 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0731 23:26:56.496403 1217002 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 23:26:56.496445 1217002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0731 23:26:57.247903 1217002 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0731 23:26:57.247959 1217002 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 23:26:57.248049 1217002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0731 23:26:57.389173 1217002 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0731 23:26:57.389229 1217002 cache_images.go:123] Successfully loaded all cached images
	I0731 23:26:57.389236 1217002 cache_images.go:92] duration metric: took 8.076946164s to LoadCachedImages
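Each missing image is transferred as an archive under /var/lib/minikube/images and streamed into CRI-O's storage with podman load, one at a time, which is what the eight-second LoadCachedImages metric above covers. The per-image loop, condensed:

    # Load every staged image archive into the runtime's storage, then confirm they are visible
    for img in /var/lib/minikube/images/*; do
      sudo podman load -i "$img"
    done
    sudo crictl images    # kube-apiserver, etcd, coredns, pause, storage-provisioner, ...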
	I0731 23:26:57.389248 1217002 kubeadm.go:934] updating node { 192.168.39.221 8443 v1.24.4 crio true true} ...
	I0731 23:26:57.389380 1217002 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-931367 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-931367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
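The drop-in above overrides kubelet's ExecStart so it talks to CRI-O over unix:///var/run/crio/crio.sock and advertises this node's IP. Once it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 379-byte copy a few lines below), the effective unit can be inspected with:

    # Show the kubelet unit together with minikube's 10-kubeadm.conf override
    sudo systemctl cat kubelet
    sudo systemctl show kubelet -p ExecStart --no-pager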
	I0731 23:26:57.389465 1217002 ssh_runner.go:195] Run: crio config
	I0731 23:26:57.439628 1217002 cni.go:84] Creating CNI manager for ""
	I0731 23:26:57.439656 1217002 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 23:26:57.439670 1217002 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 23:26:57.439690 1217002 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.221 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-931367 NodeName:test-preload-931367 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 23:26:57.439853 1217002 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-931367"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.221
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.221"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 23:26:57.439923 1217002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0731 23:26:57.450697 1217002 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:26:57.450812 1217002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 23:26:57.461249 1217002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0731 23:26:57.478922 1217002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:26:57.496360 1217002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
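The 2106-byte copy above stages the rendered kubeadm config (the four YAML documents earlier: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) at /var/tmp/minikube/kubeadm.yaml.new on the guest; the restart path later diffs it against the active file and copies it into place. To inspect it by hand from inside the VM:

    # Inspect and diff the kubeadm config that minikube staged on the guest
    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true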
	I0731 23:26:57.514367 1217002 ssh_runner.go:195] Run: grep 192.168.39.221	control-plane.minikube.internal$ /etc/hosts
	I0731 23:26:57.518545 1217002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.221	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:26:57.531561 1217002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:26:57.653001 1217002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:26:57.671634 1217002 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367 for IP: 192.168.39.221
	I0731 23:26:57.671670 1217002 certs.go:194] generating shared ca certs ...
	I0731 23:26:57.671695 1217002 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:26:57.671902 1217002 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 23:26:57.671961 1217002 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 23:26:57.671974 1217002 certs.go:256] generating profile certs ...
	I0731 23:26:57.672136 1217002 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/client.key
	I0731 23:26:57.672225 1217002 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/apiserver.key.1701be47
	I0731 23:26:57.672276 1217002 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/proxy-client.key
	I0731 23:26:57.672458 1217002 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 23:26:57.672501 1217002 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 23:26:57.672526 1217002 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 23:26:57.672561 1217002 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 23:26:57.672597 1217002 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 23:26:57.672631 1217002 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 23:26:57.672690 1217002 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:26:57.673653 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:26:57.725540 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 23:26:57.758957 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:26:57.793545 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 23:26:57.821566 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0731 23:26:57.847664 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 23:26:57.880408 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 23:26:57.906058 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 23:26:57.932519 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 23:26:57.958532 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:26:57.983639 1217002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 23:26:58.008828 1217002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 23:26:58.031024 1217002 ssh_runner.go:195] Run: openssl version
	I0731 23:26:58.037313 1217002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 23:26:58.049393 1217002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 23:26:58.054297 1217002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 23:26:58.054392 1217002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 23:26:58.060782 1217002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:26:58.074217 1217002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:26:58.087406 1217002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:26:58.092419 1217002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:26:58.092500 1217002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:26:58.098630 1217002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 23:26:58.111604 1217002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 23:26:58.124913 1217002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 23:26:58.129745 1217002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 23:26:58.129846 1217002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 23:26:58.136058 1217002 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
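The ln -fs calls above install each CA under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0) so the system trust store can find it by hash lookup. The general pattern for one certificate, using the minikube CA path from this run:

    # Link a CA cert under its subject-hash name so OpenSSL's default lookup can resolve it
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"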
	I0731 23:26:58.149136 1217002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:26:58.154257 1217002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 23:26:58.160964 1217002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 23:26:58.167848 1217002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 23:26:58.174929 1217002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 23:26:58.181657 1217002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 23:26:58.188625 1217002 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
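Each control-plane certificate is then checked with -checkend 86400, i.e. "will this still be valid 24 hours from now?". The same check for a single cert, made readable:

    # Exit status 0: cert is still valid in 24h; non-zero: it expires inside that window
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least another 24h" || echo "expiring soon - regenerate"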
	I0731 23:26:58.195347 1217002 kubeadm.go:392] StartCluster: {Name:test-preload-931367 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-931367 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:26:58.195448 1217002 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 23:26:58.195520 1217002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 23:26:58.239478 1217002 cri.go:89] found id: ""
	I0731 23:26:58.239549 1217002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 23:26:58.251644 1217002 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0731 23:26:58.251667 1217002 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0731 23:26:58.251712 1217002 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0731 23:26:58.262503 1217002 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0731 23:26:58.262951 1217002 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-931367" does not appear in /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:26:58.263064 1217002 kubeconfig.go:62] /home/jenkins/minikube-integration/19312-1172186/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-931367" cluster setting kubeconfig missing "test-preload-931367" context setting]
	I0731 23:26:58.263355 1217002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/kubeconfig: {Name:mk2865fa7a14d2aa7ec2bbf6e970de47767d4a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:26:58.263967 1217002 kapi.go:59] client config for test-preload-931367: &rest.Config{Host:"https://192.168.39.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d035c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:26:58.264676 1217002 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0731 23:26:58.275281 1217002 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.221
	I0731 23:26:58.275319 1217002 kubeadm.go:1160] stopping kube-system containers ...
	I0731 23:26:58.275333 1217002 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0731 23:26:58.275397 1217002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 23:26:58.311590 1217002 cri.go:89] found id: ""
	I0731 23:26:58.311678 1217002 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0731 23:26:58.329658 1217002 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 23:26:58.340263 1217002 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:26:58.340292 1217002 kubeadm.go:157] found existing configuration files:
	
	I0731 23:26:58.340361 1217002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 23:26:58.350350 1217002 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:26:58.350431 1217002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 23:26:58.360847 1217002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 23:26:58.370662 1217002 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:26:58.370730 1217002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 23:26:58.381056 1217002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 23:26:58.390766 1217002 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:26:58.390828 1217002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 23:26:58.401055 1217002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 23:26:58.410858 1217002 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:26:58.410941 1217002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 23:26:58.421192 1217002 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 23:26:58.431777 1217002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:26:58.526610 1217002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:26:59.197245 1217002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:26:59.470183 1217002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:26:59.534970 1217002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
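Because existing configuration was found, minikube restarts the control plane by re-running individual kubeadm init phases rather than a full kubeadm init. The same sequence, condensed into one loop over the phases shown above (binary and config paths from this run):

    # Re-run only the init phases needed to bring an existing control plane back up
    K8S_BIN=/var/lib/minikube/binaries/v1.24.4
    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" splits into two arguments
      sudo env PATH="$K8S_BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done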
	I0731 23:26:59.599042 1217002 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:26:59.599144 1217002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:27:00.099528 1217002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:27:00.599378 1217002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:27:00.619891 1217002 api_server.go:72] duration metric: took 1.020846541s to wait for apiserver process to appear ...
	I0731 23:27:00.619929 1217002 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:27:00.619957 1217002 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I0731 23:27:00.620553 1217002 api_server.go:269] stopped: https://192.168.39.221:8443/healthz: Get "https://192.168.39.221:8443/healthz": dial tcp 192.168.39.221:8443: connect: connection refused
	I0731 23:27:01.120336 1217002 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I0731 23:27:01.121002 1217002 api_server.go:269] stopped: https://192.168.39.221:8443/healthz: Get "https://192.168.39.221:8443/healthz": dial tcp 192.168.39.221:8443: connect: connection refused
	I0731 23:27:01.620666 1217002 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I0731 23:27:04.836940 1217002 api_server.go:279] https://192.168.39.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 23:27:04.836979 1217002 api_server.go:103] status: https://192.168.39.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 23:27:04.836999 1217002 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I0731 23:27:04.919263 1217002 api_server.go:279] https://192.168.39.221:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0731 23:27:04.919297 1217002 api_server.go:103] status: https://192.168.39.221:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0731 23:27:05.120721 1217002 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I0731 23:27:05.126903 1217002 api_server.go:279] https://192.168.39.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 23:27:05.126939 1217002 api_server.go:103] status: https://192.168.39.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 23:27:05.620530 1217002 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I0731 23:27:05.630690 1217002 api_server.go:279] https://192.168.39.221:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0731 23:27:05.630731 1217002 api_server.go:103] status: https://192.168.39.221:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0731 23:27:06.120215 1217002 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I0731 23:27:06.125720 1217002 api_server.go:279] https://192.168.39.221:8443/healthz returned 200:
	ok
	I0731 23:27:06.132698 1217002 api_server.go:141] control plane version: v1.24.4
	I0731 23:27:06.132738 1217002 api_server.go:131] duration metric: took 5.512800937s to wait for apiserver health ...
	I0731 23:27:06.132748 1217002 cni.go:84] Creating CNI manager for ""
	I0731 23:27:06.132755 1217002 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 23:27:06.134555 1217002 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0731 23:27:06.136022 1217002 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0731 23:27:06.147078 1217002 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0731 23:27:06.166164 1217002 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 23:27:06.176165 1217002 system_pods.go:59] 8 kube-system pods found
	I0731 23:27:06.176200 1217002 system_pods.go:61] "coredns-6d4b75cb6d-2sg7w" [85124a02-807e-4e2e-be4c-d3863f54060d] Running
	I0731 23:27:06.176205 1217002 system_pods.go:61] "coredns-6d4b75cb6d-l8j76" [3c1b5571-29ba-481b-86b0-71867be2cfaf] Running
	I0731 23:27:06.176212 1217002 system_pods.go:61] "etcd-test-preload-931367" [4768864e-8eff-406f-93f0-1d76272bb6c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 23:27:06.176218 1217002 system_pods.go:61] "kube-apiserver-test-preload-931367" [0358afad-a7c2-4474-bbc8-6526553272fe] Running
	I0731 23:27:06.176224 1217002 system_pods.go:61] "kube-controller-manager-test-preload-931367" [74fee37c-ff36-4c49-8b8f-8e0ea87cd8b4] Running
	I0731 23:27:06.176229 1217002 system_pods.go:61] "kube-proxy-b798z" [83438a43-2794-47a3-b7e4-561184a83d75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0731 23:27:06.176235 1217002 system_pods.go:61] "kube-scheduler-test-preload-931367" [73f454c2-e25f-4a8f-90c2-f7518fb7ecb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 23:27:06.176240 1217002 system_pods.go:61] "storage-provisioner" [7eed4d52-536c-45ba-b906-d5dc6e591454] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 23:27:06.176250 1217002 system_pods.go:74] duration metric: took 10.052316ms to wait for pod list to return data ...
	I0731 23:27:06.176261 1217002 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:27:06.180025 1217002 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:27:06.180055 1217002 node_conditions.go:123] node cpu capacity is 2
	I0731 23:27:06.180067 1217002 node_conditions.go:105] duration metric: took 3.800116ms to run NodePressure ...
	I0731 23:27:06.180084 1217002 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0731 23:27:06.452185 1217002 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0731 23:27:06.458635 1217002 kubeadm.go:739] kubelet initialised
	I0731 23:27:06.458663 1217002 kubeadm.go:740] duration metric: took 6.447801ms waiting for restarted kubelet to initialise ...
	I0731 23:27:06.458672 1217002 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:27:06.465178 1217002 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-2sg7w" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:06.476586 1217002 pod_ready.go:97] node "test-preload-931367" hosting pod "coredns-6d4b75cb6d-2sg7w" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:06.476619 1217002 pod_ready.go:81] duration metric: took 11.403073ms for pod "coredns-6d4b75cb6d-2sg7w" in "kube-system" namespace to be "Ready" ...
	E0731 23:27:06.476632 1217002 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-931367" hosting pod "coredns-6d4b75cb6d-2sg7w" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:06.476641 1217002 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-l8j76" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:06.487553 1217002 pod_ready.go:97] node "test-preload-931367" hosting pod "coredns-6d4b75cb6d-l8j76" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:06.487586 1217002 pod_ready.go:81] duration metric: took 10.931284ms for pod "coredns-6d4b75cb6d-l8j76" in "kube-system" namespace to be "Ready" ...
	E0731 23:27:06.487597 1217002 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-931367" hosting pod "coredns-6d4b75cb6d-l8j76" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:06.487604 1217002 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:06.494461 1217002 pod_ready.go:97] node "test-preload-931367" hosting pod "etcd-test-preload-931367" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:06.494493 1217002 pod_ready.go:81] duration metric: took 6.878075ms for pod "etcd-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	E0731 23:27:06.494507 1217002 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-931367" hosting pod "etcd-test-preload-931367" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:06.494515 1217002 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:06.570827 1217002 pod_ready.go:97] node "test-preload-931367" hosting pod "kube-apiserver-test-preload-931367" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:06.570871 1217002 pod_ready.go:81] duration metric: took 76.341531ms for pod "kube-apiserver-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	E0731 23:27:06.570884 1217002 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-931367" hosting pod "kube-apiserver-test-preload-931367" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:06.570893 1217002 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:06.970109 1217002 pod_ready.go:97] node "test-preload-931367" hosting pod "kube-controller-manager-test-preload-931367" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:06.970153 1217002 pod_ready.go:81] duration metric: took 399.246867ms for pod "kube-controller-manager-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	E0731 23:27:06.970168 1217002 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-931367" hosting pod "kube-controller-manager-test-preload-931367" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:06.970179 1217002 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-b798z" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:07.370975 1217002 pod_ready.go:97] node "test-preload-931367" hosting pod "kube-proxy-b798z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:07.371014 1217002 pod_ready.go:81] duration metric: took 400.822739ms for pod "kube-proxy-b798z" in "kube-system" namespace to be "Ready" ...
	E0731 23:27:07.371025 1217002 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-931367" hosting pod "kube-proxy-b798z" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:07.371032 1217002 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:07.770184 1217002 pod_ready.go:97] node "test-preload-931367" hosting pod "kube-scheduler-test-preload-931367" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:07.770226 1217002 pod_ready.go:81] duration metric: took 399.186601ms for pod "kube-scheduler-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	E0731 23:27:07.770241 1217002 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-931367" hosting pod "kube-scheduler-test-preload-931367" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:07.770250 1217002 pod_ready.go:38] duration metric: took 1.31156843s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:27:07.770275 1217002 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 23:27:07.783851 1217002 ops.go:34] apiserver oom_adj: -16
	I0731 23:27:07.783887 1217002 kubeadm.go:597] duration metric: took 9.53221295s to restartPrimaryControlPlane
	I0731 23:27:07.783901 1217002 kubeadm.go:394] duration metric: took 9.588562685s to StartCluster
	I0731 23:27:07.783946 1217002 settings.go:142] acquiring lock: {Name:mk076897bfd1af81579aafbccfd5a932e011b343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:27:07.784040 1217002 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:27:07.784798 1217002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/kubeconfig: {Name:mk2865fa7a14d2aa7ec2bbf6e970de47767d4a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:27:07.785055 1217002 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 23:27:07.785133 1217002 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 23:27:07.785207 1217002 addons.go:69] Setting storage-provisioner=true in profile "test-preload-931367"
	I0731 23:27:07.785222 1217002 addons.go:69] Setting default-storageclass=true in profile "test-preload-931367"
	I0731 23:27:07.785242 1217002 addons.go:234] Setting addon storage-provisioner=true in "test-preload-931367"
	W0731 23:27:07.785251 1217002 addons.go:243] addon storage-provisioner should already be in state true
	I0731 23:27:07.785252 1217002 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-931367"
	I0731 23:27:07.785282 1217002 host.go:66] Checking if "test-preload-931367" exists ...
	I0731 23:27:07.785304 1217002 config.go:182] Loaded profile config "test-preload-931367": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0731 23:27:07.785658 1217002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:27:07.785669 1217002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:27:07.785713 1217002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:27:07.785818 1217002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:27:07.786891 1217002 out.go:177] * Verifying Kubernetes components...
	I0731 23:27:07.788353 1217002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:27:07.802098 1217002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38775
	I0731 23:27:07.802587 1217002 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:27:07.803141 1217002 main.go:141] libmachine: Using API Version  1
	I0731 23:27:07.803167 1217002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:27:07.803529 1217002 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:27:07.803732 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetState
	I0731 23:27:07.803869 1217002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34321
	I0731 23:27:07.804375 1217002 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:27:07.804893 1217002 main.go:141] libmachine: Using API Version  1
	I0731 23:27:07.804917 1217002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:27:07.805283 1217002 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:27:07.805802 1217002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:27:07.805836 1217002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:27:07.806862 1217002 kapi.go:59] client config for test-preload-931367: &rest.Config{Host:"https://192.168.39.221:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/client.crt", KeyFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/test-preload-931367/client.key", CAFile:"/home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d035c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0731 23:27:07.807240 1217002 addons.go:234] Setting addon default-storageclass=true in "test-preload-931367"
	W0731 23:27:07.807260 1217002 addons.go:243] addon default-storageclass should already be in state true
	I0731 23:27:07.807290 1217002 host.go:66] Checking if "test-preload-931367" exists ...
	I0731 23:27:07.807687 1217002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:27:07.807722 1217002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:27:07.822295 1217002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46339
	I0731 23:27:07.822799 1217002 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:27:07.823386 1217002 main.go:141] libmachine: Using API Version  1
	I0731 23:27:07.823406 1217002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:27:07.823436 1217002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I0731 23:27:07.823849 1217002 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:27:07.823858 1217002 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:27:07.824063 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetState
	I0731 23:27:07.824358 1217002 main.go:141] libmachine: Using API Version  1
	I0731 23:27:07.824379 1217002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:27:07.824730 1217002 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:27:07.825300 1217002 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:27:07.825354 1217002 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:27:07.827121 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	I0731 23:27:07.829319 1217002 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:27:07.830888 1217002 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:27:07.830913 1217002 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 23:27:07.830935 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:27:07.834768 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:27:07.835347 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:27:07.835390 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:27:07.835633 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHPort
	I0731 23:27:07.835884 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:27:07.836084 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHUsername
	I0731 23:27:07.836277 1217002 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/test-preload-931367/id_rsa Username:docker}
	I0731 23:27:07.842985 1217002 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I0731 23:27:07.843580 1217002 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:27:07.844160 1217002 main.go:141] libmachine: Using API Version  1
	I0731 23:27:07.844184 1217002 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:27:07.844581 1217002 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:27:07.844764 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetState
	I0731 23:27:07.846463 1217002 main.go:141] libmachine: (test-preload-931367) Calling .DriverName
	I0731 23:27:07.846735 1217002 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 23:27:07.846750 1217002 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 23:27:07.846773 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHHostname
	I0731 23:27:07.850363 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:27:07.850836 1217002 main.go:141] libmachine: (test-preload-931367) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:fd:84", ip: ""} in network mk-test-preload-931367: {Iface:virbr1 ExpiryTime:2024-08-01 00:26:34 +0000 UTC Type:0 Mac:52:54:00:d2:fd:84 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:test-preload-931367 Clientid:01:52:54:00:d2:fd:84}
	I0731 23:27:07.850861 1217002 main.go:141] libmachine: (test-preload-931367) DBG | domain test-preload-931367 has defined IP address 192.168.39.221 and MAC address 52:54:00:d2:fd:84 in network mk-test-preload-931367
	I0731 23:27:07.851210 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHPort
	I0731 23:27:07.851474 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHKeyPath
	I0731 23:27:07.851724 1217002 main.go:141] libmachine: (test-preload-931367) Calling .GetSSHUsername
	I0731 23:27:07.851911 1217002 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/test-preload-931367/id_rsa Username:docker}
	I0731 23:27:07.961873 1217002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:27:07.979757 1217002 node_ready.go:35] waiting up to 6m0s for node "test-preload-931367" to be "Ready" ...
	I0731 23:27:08.046951 1217002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:27:08.060479 1217002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0731 23:27:09.070610 1217002 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.023618149s)
	I0731 23:27:09.070678 1217002 main.go:141] libmachine: Making call to close driver server
	I0731 23:27:09.070694 1217002 main.go:141] libmachine: (test-preload-931367) Calling .Close
	I0731 23:27:09.070689 1217002 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.010166717s)
	I0731 23:27:09.070733 1217002 main.go:141] libmachine: Making call to close driver server
	I0731 23:27:09.070750 1217002 main.go:141] libmachine: (test-preload-931367) Calling .Close
	I0731 23:27:09.071034 1217002 main.go:141] libmachine: Successfully made call to close driver server
	I0731 23:27:09.071051 1217002 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 23:27:09.071066 1217002 main.go:141] libmachine: Making call to close driver server
	I0731 23:27:09.071074 1217002 main.go:141] libmachine: (test-preload-931367) Calling .Close
	I0731 23:27:09.071083 1217002 main.go:141] libmachine: (test-preload-931367) DBG | Closing plugin on server side
	I0731 23:27:09.071035 1217002 main.go:141] libmachine: Successfully made call to close driver server
	I0731 23:27:09.071106 1217002 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 23:27:09.071129 1217002 main.go:141] libmachine: Making call to close driver server
	I0731 23:27:09.071137 1217002 main.go:141] libmachine: (test-preload-931367) Calling .Close
	I0731 23:27:09.071341 1217002 main.go:141] libmachine: Successfully made call to close driver server
	I0731 23:27:09.071355 1217002 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 23:27:09.071418 1217002 main.go:141] libmachine: Successfully made call to close driver server
	I0731 23:27:09.071427 1217002 main.go:141] libmachine: (test-preload-931367) DBG | Closing plugin on server side
	I0731 23:27:09.071429 1217002 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 23:27:09.078673 1217002 main.go:141] libmachine: Making call to close driver server
	I0731 23:27:09.078696 1217002 main.go:141] libmachine: (test-preload-931367) Calling .Close
	I0731 23:27:09.079021 1217002 main.go:141] libmachine: Successfully made call to close driver server
	I0731 23:27:09.079045 1217002 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 23:27:09.079073 1217002 main.go:141] libmachine: (test-preload-931367) DBG | Closing plugin on server side
	I0731 23:27:09.080839 1217002 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 23:27:09.082120 1217002 addons.go:510] duration metric: took 1.296988177s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 23:27:09.983460 1217002 node_ready.go:53] node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:11.984951 1217002 node_ready.go:53] node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:14.484314 1217002 node_ready.go:53] node "test-preload-931367" has status "Ready":"False"
	I0731 23:27:15.483679 1217002 node_ready.go:49] node "test-preload-931367" has status "Ready":"True"
	I0731 23:27:15.483708 1217002 node_ready.go:38] duration metric: took 7.503912074s for node "test-preload-931367" to be "Ready" ...
	I0731 23:27:15.483718 1217002 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:27:15.489769 1217002 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-l8j76" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:15.498427 1217002 pod_ready.go:92] pod "coredns-6d4b75cb6d-l8j76" in "kube-system" namespace has status "Ready":"True"
	I0731 23:27:15.498454 1217002 pod_ready.go:81] duration metric: took 8.65192ms for pod "coredns-6d4b75cb6d-l8j76" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:15.498465 1217002 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:16.010416 1217002 pod_ready.go:92] pod "etcd-test-preload-931367" in "kube-system" namespace has status "Ready":"True"
	I0731 23:27:16.010454 1217002 pod_ready.go:81] duration metric: took 511.980945ms for pod "etcd-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:16.010468 1217002 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:18.019539 1217002 pod_ready.go:102] pod "kube-apiserver-test-preload-931367" in "kube-system" namespace has status "Ready":"False"
	I0731 23:27:19.018906 1217002 pod_ready.go:92] pod "kube-apiserver-test-preload-931367" in "kube-system" namespace has status "Ready":"True"
	I0731 23:27:19.018943 1217002 pod_ready.go:81] duration metric: took 3.008466641s for pod "kube-apiserver-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:19.018959 1217002 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:19.026708 1217002 pod_ready.go:92] pod "kube-controller-manager-test-preload-931367" in "kube-system" namespace has status "Ready":"True"
	I0731 23:27:19.026738 1217002 pod_ready.go:81] duration metric: took 7.771983ms for pod "kube-controller-manager-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:19.026755 1217002 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b798z" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:19.032814 1217002 pod_ready.go:92] pod "kube-proxy-b798z" in "kube-system" namespace has status "Ready":"True"
	I0731 23:27:19.032841 1217002 pod_ready.go:81] duration metric: took 6.07873ms for pod "kube-proxy-b798z" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:19.032855 1217002 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:20.041204 1217002 pod_ready.go:92] pod "kube-scheduler-test-preload-931367" in "kube-system" namespace has status "Ready":"True"
	I0731 23:27:20.041231 1217002 pod_ready.go:81] duration metric: took 1.008367927s for pod "kube-scheduler-test-preload-931367" in "kube-system" namespace to be "Ready" ...
	I0731 23:27:20.041242 1217002 pod_ready.go:38] duration metric: took 4.557513934s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:27:20.041259 1217002 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:27:20.041313 1217002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:27:20.056292 1217002 api_server.go:72] duration metric: took 12.271203901s to wait for apiserver process to appear ...
	I0731 23:27:20.056327 1217002 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:27:20.056354 1217002 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I0731 23:27:20.063160 1217002 api_server.go:279] https://192.168.39.221:8443/healthz returned 200:
	ok
	I0731 23:27:20.064629 1217002 api_server.go:141] control plane version: v1.24.4
	I0731 23:27:20.064665 1217002 api_server.go:131] duration metric: took 8.330283ms to wait for apiserver health ...
	I0731 23:27:20.064692 1217002 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 23:27:20.086482 1217002 system_pods.go:59] 7 kube-system pods found
	I0731 23:27:20.086515 1217002 system_pods.go:61] "coredns-6d4b75cb6d-l8j76" [3c1b5571-29ba-481b-86b0-71867be2cfaf] Running
	I0731 23:27:20.086520 1217002 system_pods.go:61] "etcd-test-preload-931367" [4768864e-8eff-406f-93f0-1d76272bb6c6] Running
	I0731 23:27:20.086524 1217002 system_pods.go:61] "kube-apiserver-test-preload-931367" [0358afad-a7c2-4474-bbc8-6526553272fe] Running
	I0731 23:27:20.086527 1217002 system_pods.go:61] "kube-controller-manager-test-preload-931367" [74fee37c-ff36-4c49-8b8f-8e0ea87cd8b4] Running
	I0731 23:27:20.086530 1217002 system_pods.go:61] "kube-proxy-b798z" [83438a43-2794-47a3-b7e4-561184a83d75] Running
	I0731 23:27:20.086533 1217002 system_pods.go:61] "kube-scheduler-test-preload-931367" [73f454c2-e25f-4a8f-90c2-f7518fb7ecb2] Running
	I0731 23:27:20.086539 1217002 system_pods.go:61] "storage-provisioner" [7eed4d52-536c-45ba-b906-d5dc6e591454] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 23:27:20.086545 1217002 system_pods.go:74] duration metric: took 21.847449ms to wait for pod list to return data ...
	I0731 23:27:20.086556 1217002 default_sa.go:34] waiting for default service account to be created ...
	I0731 23:27:20.283770 1217002 default_sa.go:45] found service account: "default"
	I0731 23:27:20.283806 1217002 default_sa.go:55] duration metric: took 197.241944ms for default service account to be created ...
	I0731 23:27:20.283816 1217002 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 23:27:20.487685 1217002 system_pods.go:86] 7 kube-system pods found
	I0731 23:27:20.487728 1217002 system_pods.go:89] "coredns-6d4b75cb6d-l8j76" [3c1b5571-29ba-481b-86b0-71867be2cfaf] Running
	I0731 23:27:20.487736 1217002 system_pods.go:89] "etcd-test-preload-931367" [4768864e-8eff-406f-93f0-1d76272bb6c6] Running
	I0731 23:27:20.487743 1217002 system_pods.go:89] "kube-apiserver-test-preload-931367" [0358afad-a7c2-4474-bbc8-6526553272fe] Running
	I0731 23:27:20.487749 1217002 system_pods.go:89] "kube-controller-manager-test-preload-931367" [74fee37c-ff36-4c49-8b8f-8e0ea87cd8b4] Running
	I0731 23:27:20.487754 1217002 system_pods.go:89] "kube-proxy-b798z" [83438a43-2794-47a3-b7e4-561184a83d75] Running
	I0731 23:27:20.487758 1217002 system_pods.go:89] "kube-scheduler-test-preload-931367" [73f454c2-e25f-4a8f-90c2-f7518fb7ecb2] Running
	I0731 23:27:20.487770 1217002 system_pods.go:89] "storage-provisioner" [7eed4d52-536c-45ba-b906-d5dc6e591454] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0731 23:27:20.487791 1217002 system_pods.go:126] duration metric: took 203.969016ms to wait for k8s-apps to be running ...
	I0731 23:27:20.487809 1217002 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 23:27:20.487870 1217002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:27:20.502940 1217002 system_svc.go:56] duration metric: took 15.115985ms WaitForService to wait for kubelet
	I0731 23:27:20.502986 1217002 kubeadm.go:582] duration metric: took 12.717903349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:27:20.503015 1217002 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:27:20.685534 1217002 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:27:20.685569 1217002 node_conditions.go:123] node cpu capacity is 2
	I0731 23:27:20.685581 1217002 node_conditions.go:105] duration metric: took 182.559611ms to run NodePressure ...
	I0731 23:27:20.685597 1217002 start.go:241] waiting for startup goroutines ...
	I0731 23:27:20.685607 1217002 start.go:246] waiting for cluster config update ...
	I0731 23:27:20.685621 1217002 start.go:255] writing updated cluster config ...
	I0731 23:27:20.685960 1217002 ssh_runner.go:195] Run: rm -f paused
	I0731 23:27:20.737216 1217002 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0731 23:27:20.739291 1217002 out.go:177] 
	W0731 23:27:20.740625 1217002 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0731 23:27:20.741837 1217002 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0731 23:27:20.743199 1217002 out.go:177] * Done! kubectl is now configured to use "test-preload-931367" cluster and "default" namespace by default
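For reference, the repeated "Checking apiserver healthz at https://192.168.39.221:8443/healthz ..." entries above come from a wait loop that polls /healthz until the per-hook 500 responses ([-]poststarthook/... failed) give way to a plain 200 "ok". The following is a minimal, self-contained Go sketch of such a poll loop. It is an illustration only, not minikube's implementation; the URL, timeout, poll interval, and TLS handling are assumptions made for the example.

// healthzwait.go: hypothetical sketch of polling an apiserver /healthz
// endpoint until it returns HTTP 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver in this setup serves a self-signed certificate,
		// so verification is skipped in this sketch only.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
			// A 500 carries the per-hook [+]/[-] breakdown seen in the log.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("healthz did not return 200 within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.221:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The 500 responses are expected transiently: the apiserver reports healthz as failed while post-start hooks such as rbac/bootstrap-roles are still completing, which is why the loop keeps polling rather than failing on the first non-200 answer.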
	
	
	==> CRI-O <==
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.711993082Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2eb4124b-5442-48c1-87b4-c0abb71cf97e name=/runtime.v1.RuntimeService/Version
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.712896482Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2279e2ac-82e7-443f-95f6-e1547c8ecfb9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.713386141Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468441713365143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2279e2ac-82e7-443f-95f6-e1547c8ecfb9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.713919621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64b83ff0-44a4-4fe1-bc25-951a2e71c347 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.714009454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64b83ff0-44a4-4fe1-bc25-951a2e71c347 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.714166629Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4b48e25290d9f9d150536272ba5790dcbbb42ee3b9a4b3c0fa6728046c5b03b5,PodSandboxId:77f19bddd425287806aa20a3465c07fffc2e0d234bed5a1cb8941d62644a6727,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722468433829526038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-l8j76,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1b5571-29ba-481b-86b0-71867be2cfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 936e7f9c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1b4aa8461323fd60e0ea3c6981c662646e0583b816c34106eeaf871deefce8,PodSandboxId:5b1cb330a2433c6eb5125111b708837a57fa00f1f41ed68b229c9d87129db7f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722468426746460349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 7eed4d52-536c-45ba-b906-d5dc6e591454,},Annotations:map[string]string{io.kubernetes.container.hash: fc8d7336,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293b9e8f1963fca2261f49f02b19ce2581b0c798f4451827f6d7456e82bf023c,PodSandboxId:aa010ad5d617ffe781b16213dcb4cb55bfdeae665519bdb499ff3a9e3635c746,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722468426600265514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b798z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 834
38a43-2794-47a3-b7e4-561184a83d75,},Annotations:map[string]string{io.kubernetes.container.hash: 76ecfce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f9da4ef7e9a41a3c6985a556cd32f369c49059b301ac45c462d3c1aa7649d5c,PodSandboxId:f427c4a3262313b686f83042eb1c880b17e3d5a9014873c82085c739183c8a17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722468420364003126,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeead75f0
26c8dd3656835f85f78a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3174e5ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a44e3343874cbcd2346936faefe8d1ff1054b69f73f52c1cf827707a1a9972e,PodSandboxId:73ea49b4c0cf75b0717cf853bea06ca93427034f79e9287c0202db444536da5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722468420319476548,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ecca3d1501b861b4a09
6e39ab0dfce,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf1a11705f9f2a963ddc576c4503909481c210a745987ea24fd73cbf8a3488e,PodSandboxId:bf02c9b5ebbbf63277df425ba61743b17b1f9972a01c29a2837d574d159e2a76,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722468420299062112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b6946d5ef88997034053882172456b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3ce7459f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872fc2b3ffe7257927c3fc4d0afe4d897892ff6baccd6b0dc5b83b42deb6b4c9,PodSandboxId:9e7ab1695baff6677a7800ee5a91a882fc65937b6151dcad6b155ca4caff5f91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722468420302086701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79232c6eb6dfeaf49236e5d65c515320,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64b83ff0-44a4-4fe1-bc25-951a2e71c347 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.750532437Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8980f29-b323-4f10-83fa-538e4c42e62b name=/runtime.v1.RuntimeService/Version
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.750605879Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8980f29-b323-4f10-83fa-538e4c42e62b name=/runtime.v1.RuntimeService/Version
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.751739688Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=448e19b7-acd8-4b8a-8ee7-dcbd77d97114 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.752230940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468441752157448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=448e19b7-acd8-4b8a-8ee7-dcbd77d97114 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.752724848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ebb6b08f-09f4-4567-948e-0158b8d5c823 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.752774233Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ebb6b08f-09f4-4567-948e-0158b8d5c823 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.752929909Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4b48e25290d9f9d150536272ba5790dcbbb42ee3b9a4b3c0fa6728046c5b03b5,PodSandboxId:77f19bddd425287806aa20a3465c07fffc2e0d234bed5a1cb8941d62644a6727,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722468433829526038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-l8j76,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1b5571-29ba-481b-86b0-71867be2cfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 936e7f9c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1b4aa8461323fd60e0ea3c6981c662646e0583b816c34106eeaf871deefce8,PodSandboxId:5b1cb330a2433c6eb5125111b708837a57fa00f1f41ed68b229c9d87129db7f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722468426746460349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 7eed4d52-536c-45ba-b906-d5dc6e591454,},Annotations:map[string]string{io.kubernetes.container.hash: fc8d7336,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293b9e8f1963fca2261f49f02b19ce2581b0c798f4451827f6d7456e82bf023c,PodSandboxId:aa010ad5d617ffe781b16213dcb4cb55bfdeae665519bdb499ff3a9e3635c746,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722468426600265514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b798z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 834
38a43-2794-47a3-b7e4-561184a83d75,},Annotations:map[string]string{io.kubernetes.container.hash: 76ecfce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f9da4ef7e9a41a3c6985a556cd32f369c49059b301ac45c462d3c1aa7649d5c,PodSandboxId:f427c4a3262313b686f83042eb1c880b17e3d5a9014873c82085c739183c8a17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722468420364003126,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeead75f0
26c8dd3656835f85f78a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3174e5ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a44e3343874cbcd2346936faefe8d1ff1054b69f73f52c1cf827707a1a9972e,PodSandboxId:73ea49b4c0cf75b0717cf853bea06ca93427034f79e9287c0202db444536da5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722468420319476548,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ecca3d1501b861b4a09
6e39ab0dfce,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf1a11705f9f2a963ddc576c4503909481c210a745987ea24fd73cbf8a3488e,PodSandboxId:bf02c9b5ebbbf63277df425ba61743b17b1f9972a01c29a2837d574d159e2a76,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722468420299062112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b6946d5ef88997034053882172456b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3ce7459f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872fc2b3ffe7257927c3fc4d0afe4d897892ff6baccd6b0dc5b83b42deb6b4c9,PodSandboxId:9e7ab1695baff6677a7800ee5a91a882fc65937b6151dcad6b155ca4caff5f91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722468420302086701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79232c6eb6dfeaf49236e5d65c515320,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ebb6b08f-09f4-4567-948e-0158b8d5c823 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.786066929Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5a212ab-e82c-403c-be9d-c4a26c4f8767 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.786139704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5a212ab-e82c-403c-be9d-c4a26c4f8767 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.787524037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1713dd6c-ebff-4b2f-9370-bbac752b95d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.787990774Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468441787966871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1713dd6c-ebff-4b2f-9370-bbac752b95d6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.788766937Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=b62d150b-9d87-4c74-bebb-a1a52e05c44e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.788948157Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:77f19bddd425287806aa20a3465c07fffc2e0d234bed5a1cb8941d62644a6727,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-l8j76,Uid:3c1b5571-29ba-481b-86b0-71867be2cfaf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722468433604853489,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-l8j76,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1b5571-29ba-481b-86b0-71867be2cfaf,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T23:27:05.608407170Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aa010ad5d617ffe781b16213dcb4cb55bfdeae665519bdb499ff3a9e3635c746,Metadata:&PodSandboxMetadata{Name:kube-proxy-b798z,Uid:83438a43-2794-47a3-b7e4-561184a83d75,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1722468426516805615,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-b798z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83438a43-2794-47a3-b7e4-561184a83d75,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-31T23:27:05.608402510Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b1cb330a2433c6eb5125111b708837a57fa00f1f41ed68b229c9d87129db7f8,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:7eed4d52-536c-45ba-b906-d5dc6e591454,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722468426232681674,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7eed4d52-536c-45ba-b906-d5dc
6e591454,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-31T23:27:05.608404744Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9e7ab1695baff6677a7800ee5a91a882fc65937b6151dcad6b155ca4caff5f91,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-931367,Ui
d:79232c6eb6dfeaf49236e5d65c515320,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722468420136948341,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79232c6eb6dfeaf49236e5d65c515320,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 79232c6eb6dfeaf49236e5d65c515320,kubernetes.io/config.seen: 2024-07-31T23:26:59.597649801Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f427c4a3262313b686f83042eb1c880b17e3d5a9014873c82085c739183c8a17,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-931367,Uid:8aeead75f026c8dd3656835f85f78a1d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722468420130750863,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-931367,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeead75f026c8dd3656835f85f78a1d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.221:8443,kubernetes.io/config.hash: 8aeead75f026c8dd3656835f85f78a1d,kubernetes.io/config.seen: 2024-07-31T23:26:59.597648499Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:73ea49b4c0cf75b0717cf853bea06ca93427034f79e9287c0202db444536da5c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-931367,Uid:74ecca3d1501b861b4a096e39ab0dfce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722468420127486718,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ecca3d1501b861b4a096e39ab0dfce,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 74ecca3d1501b861b4a096e39ab0dfce,kub
ernetes.io/config.seen: 2024-07-31T23:26:59.597630142Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bf02c9b5ebbbf63277df425ba61743b17b1f9972a01c29a2837d574d159e2a76,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-931367,Uid:e2b6946d5ef88997034053882172456b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722468420117593108,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b6946d5ef88997034053882172456b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.221:2379,kubernetes.io/config.hash: e2b6946d5ef88997034053882172456b,kubernetes.io/config.seen: 2024-07-31T23:26:59.597646926Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b62d150b-9d87-4c74-bebb-a1a52e05c44e name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.789539241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf0198d0-85d2-4335-af58-564a3160ef14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.789614521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf0198d0-85d2-4335-af58-564a3160ef14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.789774809Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4b48e25290d9f9d150536272ba5790dcbbb42ee3b9a4b3c0fa6728046c5b03b5,PodSandboxId:77f19bddd425287806aa20a3465c07fffc2e0d234bed5a1cb8941d62644a6727,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722468433829526038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-l8j76,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1b5571-29ba-481b-86b0-71867be2cfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 936e7f9c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1b4aa8461323fd60e0ea3c6981c662646e0583b816c34106eeaf871deefce8,PodSandboxId:5b1cb330a2433c6eb5125111b708837a57fa00f1f41ed68b229c9d87129db7f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722468426746460349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 7eed4d52-536c-45ba-b906-d5dc6e591454,},Annotations:map[string]string{io.kubernetes.container.hash: fc8d7336,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293b9e8f1963fca2261f49f02b19ce2581b0c798f4451827f6d7456e82bf023c,PodSandboxId:aa010ad5d617ffe781b16213dcb4cb55bfdeae665519bdb499ff3a9e3635c746,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722468426600265514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b798z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 834
38a43-2794-47a3-b7e4-561184a83d75,},Annotations:map[string]string{io.kubernetes.container.hash: 76ecfce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f9da4ef7e9a41a3c6985a556cd32f369c49059b301ac45c462d3c1aa7649d5c,PodSandboxId:f427c4a3262313b686f83042eb1c880b17e3d5a9014873c82085c739183c8a17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722468420364003126,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeead75f0
26c8dd3656835f85f78a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3174e5ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a44e3343874cbcd2346936faefe8d1ff1054b69f73f52c1cf827707a1a9972e,PodSandboxId:73ea49b4c0cf75b0717cf853bea06ca93427034f79e9287c0202db444536da5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722468420319476548,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ecca3d1501b861b4a09
6e39ab0dfce,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf1a11705f9f2a963ddc576c4503909481c210a745987ea24fd73cbf8a3488e,PodSandboxId:bf02c9b5ebbbf63277df425ba61743b17b1f9972a01c29a2837d574d159e2a76,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722468420299062112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b6946d5ef88997034053882172456b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3ce7459f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872fc2b3ffe7257927c3fc4d0afe4d897892ff6baccd6b0dc5b83b42deb6b4c9,PodSandboxId:9e7ab1695baff6677a7800ee5a91a882fc65937b6151dcad6b155ca4caff5f91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722468420302086701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79232c6eb6dfeaf49236e5d65c515320,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf0198d0-85d2-4335-af58-564a3160ef14 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.790054898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3097d8f3-d7e7-4218-aac0-065daa007f89 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.790106953Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3097d8f3-d7e7-4218-aac0-065daa007f89 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:27:21 test-preload-931367 crio[691]: time="2024-07-31 23:27:21.790319503Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4b48e25290d9f9d150536272ba5790dcbbb42ee3b9a4b3c0fa6728046c5b03b5,PodSandboxId:77f19bddd425287806aa20a3465c07fffc2e0d234bed5a1cb8941d62644a6727,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722468433829526038,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-l8j76,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1b5571-29ba-481b-86b0-71867be2cfaf,},Annotations:map[string]string{io.kubernetes.container.hash: 936e7f9c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f1b4aa8461323fd60e0ea3c6981c662646e0583b816c34106eeaf871deefce8,PodSandboxId:5b1cb330a2433c6eb5125111b708837a57fa00f1f41ed68b229c9d87129db7f8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722468426746460349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-
system,io.kubernetes.pod.uid: 7eed4d52-536c-45ba-b906-d5dc6e591454,},Annotations:map[string]string{io.kubernetes.container.hash: fc8d7336,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293b9e8f1963fca2261f49f02b19ce2581b0c798f4451827f6d7456e82bf023c,PodSandboxId:aa010ad5d617ffe781b16213dcb4cb55bfdeae665519bdb499ff3a9e3635c746,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722468426600265514,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b798z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 834
38a43-2794-47a3-b7e4-561184a83d75,},Annotations:map[string]string{io.kubernetes.container.hash: 76ecfce4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f9da4ef7e9a41a3c6985a556cd32f369c49059b301ac45c462d3c1aa7649d5c,PodSandboxId:f427c4a3262313b686f83042eb1c880b17e3d5a9014873c82085c739183c8a17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722468420364003126,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aeead75f0
26c8dd3656835f85f78a1d,},Annotations:map[string]string{io.kubernetes.container.hash: 3174e5ff,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a44e3343874cbcd2346936faefe8d1ff1054b69f73f52c1cf827707a1a9972e,PodSandboxId:73ea49b4c0cf75b0717cf853bea06ca93427034f79e9287c0202db444536da5c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722468420319476548,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74ecca3d1501b861b4a09
6e39ab0dfce,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cf1a11705f9f2a963ddc576c4503909481c210a745987ea24fd73cbf8a3488e,PodSandboxId:bf02c9b5ebbbf63277df425ba61743b17b1f9972a01c29a2837d574d159e2a76,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722468420299062112,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2b6946d5ef88997034053882172456b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 3ce7459f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:872fc2b3ffe7257927c3fc4d0afe4d897892ff6baccd6b0dc5b83b42deb6b4c9,PodSandboxId:9e7ab1695baff6677a7800ee5a91a882fc65937b6151dcad6b155ca4caff5f91,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722468420302086701,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-931367,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79232c6eb6dfeaf49236e5d65c515320,},Annotations
:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3097d8f3-d7e7-4218-aac0-065daa007f89 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4b48e25290d9f       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   77f19bddd4252       coredns-6d4b75cb6d-l8j76
	3f1b4aa846132       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Exited              storage-provisioner       2                   5b1cb330a2433       storage-provisioner
	293b9e8f1963f       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   aa010ad5d617f       kube-proxy-b798z
	4f9da4ef7e9a4       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   f427c4a326231       kube-apiserver-test-preload-931367
	5a44e3343874c       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   73ea49b4c0cf7       kube-scheduler-test-preload-931367
	872fc2b3ffe72       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   9e7ab1695baff       kube-controller-manager-test-preload-931367
	1cf1a11705f9f       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   bf02c9b5ebbbf       etcd-test-preload-931367
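	The container status table above is the CRI view of the node. For reference, roughly the same listing could be produced on the host with crictl against the CRI-O socket (an illustrative sketch, not something this test run executes):
	
	  # assumes crictl is pointed at unix:///var/run/crio/crio.sock, the socket annotated on the node below
	  $ sudo crictl ps -a     # all containers, including the exited storage-provisioner attempt
	  $ sudo crictl pods      # the pod sandboxes shown in the ListPodSandbox responses above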
	
	
	==> coredns [4b48e25290d9f9d150536272ba5790dcbbb42ee3b9a4b3c0fa6728046c5b03b5] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:51272 - 9537 "HINFO IN 3240395548887689952.517184037174731505. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020749967s
	
	
	==> describe nodes <==
	Name:               test-preload-931367
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-931367
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=test-preload-931367
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T23_25_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:25:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-931367
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:27:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:27:15 +0000   Wed, 31 Jul 2024 23:25:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:27:15 +0000   Wed, 31 Jul 2024 23:25:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:27:15 +0000   Wed, 31 Jul 2024 23:25:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:27:15 +0000   Wed, 31 Jul 2024 23:27:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.221
	  Hostname:    test-preload-931367
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f79e732d8cdc438db9a1f3467fd72e1f
	  System UUID:                f79e732d-8cdc-438d-b9a1-f3467fd72e1f
	  Boot ID:                    2f78c49e-9923-43c9-8399-3e1cbb6169d4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-l8j76                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 etcd-test-preload-931367                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         90s
	  kube-system                 kube-apiserver-test-preload-931367             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-test-preload-931367    200m (10%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-b798z                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-test-preload-931367             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 76s                kube-proxy       
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s (x5 over 99s)  kubelet          Node test-preload-931367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     98s (x4 over 99s)  kubelet          Node test-preload-931367 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    98s (x5 over 99s)  kubelet          Node test-preload-931367 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     90s                kubelet          Node test-preload-931367 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node test-preload-931367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node test-preload-931367 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                80s                kubelet          Node test-preload-931367 status is now: NodeReady
	  Normal  RegisteredNode           78s                node-controller  Node test-preload-931367 event: Registered Node test-preload-931367 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node test-preload-931367 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node test-preload-931367 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node test-preload-931367 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-931367 event: Registered Node test-preload-931367 in Controller
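	The node description above (labels, conditions, capacity, non-terminated pods, events) is standard kubectl output; it could be regenerated against this cluster with something like the following (illustrative only):
	
	  $ kubectl describe node test-preload-931367
	  $ kubectl get node test-preload-931367 -o wide   # compact view: roles, kubelet v1.24.4, container runtime cri-o://1.29.1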
	
	
	==> dmesg <==
	[Jul31 23:26] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.047806] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037217] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.769232] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.032978] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +2.363114] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.976837] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.057807] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062622] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.168085] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.150166] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.270931] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[ +12.850537] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +0.057211] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.747539] systemd-fstab-generator[1136]: Ignoring "noauto" option for root device
	[Jul31 23:27] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.604806] systemd-fstab-generator[1832]: Ignoring "noauto" option for root device
	[  +5.763577] kauditd_printk_skb: 59 callbacks suppressed
	
	
	==> etcd [1cf1a11705f9f2a963ddc576c4503909481c210a745987ea24fd73cbf8a3488e] <==
	{"level":"info","ts":"2024-07-31T23:27:00.720Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"e7b0d5fc33cf92f8","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-31T23:27:00.739Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T23:27:00.740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 switched to configuration voters=(16695079097840145144)"}
	{"level":"info","ts":"2024-07-31T23:27:00.740Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c75d0b2482cd9027","local-member-id":"e7b0d5fc33cf92f8","added-peer-id":"e7b0d5fc33cf92f8","added-peer-peer-urls":["https://192.168.39.221:2380"]}
	{"level":"info","ts":"2024-07-31T23:27:00.742Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c75d0b2482cd9027","local-member-id":"e7b0d5fc33cf92f8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:27:00.742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:27:00.746Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T23:27:00.749Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e7b0d5fc33cf92f8","initial-advertise-peer-urls":["https://192.168.39.221:2380"],"listen-peer-urls":["https://192.168.39.221:2380"],"advertise-client-urls":["https://192.168.39.221:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.221:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T23:27:00.749Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T23:27:00.749Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.221:2380"}
	{"level":"info","ts":"2024-07-31T23:27:00.752Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.221:2380"}
	{"level":"info","ts":"2024-07-31T23:27:02.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T23:27:02.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T23:27:02.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 received MsgPreVoteResp from e7b0d5fc33cf92f8 at term 2"}
	{"level":"info","ts":"2024-07-31T23:27:02.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T23:27:02.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 received MsgVoteResp from e7b0d5fc33cf92f8 at term 3"}
	{"level":"info","ts":"2024-07-31T23:27:02.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7b0d5fc33cf92f8 became leader at term 3"}
	{"level":"info","ts":"2024-07-31T23:27:02.365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e7b0d5fc33cf92f8 elected leader e7b0d5fc33cf92f8 at term 3"}
	{"level":"info","ts":"2024-07-31T23:27:02.370Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"e7b0d5fc33cf92f8","local-member-attributes":"{Name:test-preload-931367 ClientURLs:[https://192.168.39.221:2379]}","request-path":"/0/members/e7b0d5fc33cf92f8/attributes","cluster-id":"c75d0b2482cd9027","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T23:27:02.370Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:27:02.371Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:27:02.372Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T23:27:02.372Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.221:2379"}
	{"level":"info","ts":"2024-07-31T23:27:02.372Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T23:27:02.372Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
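	The etcd log above shows the single member e7b0d5fc33cf92f8 winning the election at term 3 and serving clients on 192.168.39.221:2379. A hedged sketch of how its health could be checked from the node, assuming etcdctl (v3 API) is available and the server certificate shown in the log is accepted for client auth; this is not part of the test run:
	
	  $ sudo etcdctl --endpoints=https://192.168.39.221:2379 \
	      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	      --cert=/var/lib/minikube/certs/etcd/server.crt \
	      --key=/var/lib/minikube/certs/etcd/server.key \
	      endpoint health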
	
	
	==> kernel <==
	 23:27:22 up 0 min,  0 users,  load average: 1.00, 0.28, 0.10
	Linux test-preload-931367 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4f9da4ef7e9a41a3c6985a556cd32f369c49059b301ac45c462d3c1aa7649d5c] <==
	I0731 23:27:04.807891       1 naming_controller.go:291] Starting NamingConditionController
	I0731 23:27:04.811901       1 establishing_controller.go:76] Starting EstablishingController
	I0731 23:27:04.812045       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0731 23:27:04.812090       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0731 23:27:04.812128       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0731 23:27:04.774942       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0731 23:27:04.918478       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0731 23:27:04.968274       1 cache.go:39] Caches are synced for autoregister controller
	I0731 23:27:04.968497       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 23:27:04.969001       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0731 23:27:04.968407       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0731 23:27:04.968425       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0731 23:27:04.969233       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 23:27:04.977382       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0731 23:27:04.987587       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 23:27:05.462538       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0731 23:27:05.772437       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 23:27:06.305459       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0731 23:27:06.318755       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0731 23:27:06.376898       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0731 23:27:06.421710       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 23:27:06.434041       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 23:27:06.916786       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0731 23:27:17.677886       1 controller.go:611] quota admission added evaluator for: endpoints
	I0731 23:27:17.746461       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [872fc2b3ffe7257927c3fc4d0afe4d897892ff6baccd6b0dc5b83b42deb6b4c9] <==
	I0731 23:27:17.659994       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0731 23:27:17.663871       1 shared_informer.go:262] Caches are synced for ephemeral
	I0731 23:27:17.663928       1 shared_informer.go:262] Caches are synced for daemon sets
	I0731 23:27:17.667421       1 shared_informer.go:262] Caches are synced for expand
	I0731 23:27:17.667617       1 shared_informer.go:262] Caches are synced for endpoint
	I0731 23:27:17.670662       1 shared_informer.go:262] Caches are synced for job
	I0731 23:27:17.671123       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0731 23:27:17.678244       1 shared_informer.go:262] Caches are synced for taint
	I0731 23:27:17.678524       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0731 23:27:17.678650       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-931367. Assuming now as a timestamp.
	I0731 23:27:17.678693       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0731 23:27:17.679093       1 event.go:294] "Event occurred" object="test-preload-931367" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-931367 event: Registered Node test-preload-931367 in Controller"
	I0731 23:27:17.679099       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0731 23:27:17.698857       1 shared_informer.go:262] Caches are synced for disruption
	I0731 23:27:17.698929       1 disruption.go:371] Sending events to api server.
	I0731 23:27:17.701430       1 shared_informer.go:262] Caches are synced for stateful set
	I0731 23:27:17.719070       1 shared_informer.go:262] Caches are synced for deployment
	I0731 23:27:17.757829       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 23:27:17.759345       1 shared_informer.go:262] Caches are synced for resource quota
	I0731 23:27:17.775562       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0731 23:27:17.823519       1 shared_informer.go:262] Caches are synced for crt configmap
	I0731 23:27:17.884901       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0731 23:27:18.245264       1 shared_informer.go:262] Caches are synced for garbage collector
	I0731 23:27:18.245333       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0731 23:27:18.286856       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [293b9e8f1963fca2261f49f02b19ce2581b0c798f4451827f6d7456e82bf023c] <==
	I0731 23:27:06.837673       1 node.go:163] Successfully retrieved node IP: 192.168.39.221
	I0731 23:27:06.837795       1 server_others.go:138] "Detected node IP" address="192.168.39.221"
	I0731 23:27:06.837826       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0731 23:27:06.908613       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0731 23:27:06.908657       1 server_others.go:206] "Using iptables Proxier"
	I0731 23:27:06.908686       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0731 23:27:06.909465       1 server.go:661] "Version info" version="v1.24.4"
	I0731 23:27:06.909481       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:27:06.910963       1 config.go:317] "Starting service config controller"
	I0731 23:27:06.911150       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0731 23:27:06.911207       1 config.go:226] "Starting endpoint slice config controller"
	I0731 23:27:06.911213       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0731 23:27:06.913378       1 config.go:444] "Starting node config controller"
	I0731 23:27:06.913472       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0731 23:27:07.011435       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0731 23:27:07.011518       1 shared_informer.go:262] Caches are synced for service config
	I0731 23:27:07.013589       1 shared_informer.go:262] Caches are synced for node config
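	kube-proxy reports no explicit proxy mode and defaults to the iptables proxier ("Using iptables Proxier"), so service routing on this node is realized as iptables NAT rules. If needed, those rules could be inspected directly on the host (illustrative; KUBE-SERVICES is the standard kube-proxy chain name):
	
	  $ sudo iptables -t nat -L KUBE-SERVICES -n | head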
	
	
	==> kube-scheduler [5a44e3343874cbcd2346936faefe8d1ff1054b69f73f52c1cf827707a1a9972e] <==
	W0731 23:27:04.893085       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 23:27:04.902747       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 23:27:04.893412       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 23:27:04.902872       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 23:27:04.893579       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0731 23:27:04.902940       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0731 23:27:04.894089       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 23:27:04.902987       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 23:27:04.894258       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 23:27:04.903033       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 23:27:04.894357       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 23:27:04.903079       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 23:27:04.894441       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 23:27:04.903137       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 23:27:04.894527       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 23:27:04.903250       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 23:27:04.894808       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 23:27:04.895766       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 23:27:04.895852       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 23:27:04.895947       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 23:27:04.903352       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 23:27:04.903426       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 23:27:04.903464       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 23:27:04.903518       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0731 23:27:06.275866       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 23:27:05 test-preload-931367 kubelet[1143]: I0731 23:27:05.670453    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83438a43-2794-47a3-b7e4-561184a83d75-lib-modules\") pod \"kube-proxy-b798z\" (UID: \"83438a43-2794-47a3-b7e4-561184a83d75\") " pod="kube-system/kube-proxy-b798z"
	Jul 31 23:27:05 test-preload-931367 kubelet[1143]: I0731 23:27:05.670636    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c1b5571-29ba-481b-86b0-71867be2cfaf-config-volume\") pod \"coredns-6d4b75cb6d-l8j76\" (UID: \"3c1b5571-29ba-481b-86b0-71867be2cfaf\") " pod="kube-system/coredns-6d4b75cb6d-l8j76"
	Jul 31 23:27:05 test-preload-931367 kubelet[1143]: I0731 23:27:05.670661    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkt5f\" (UniqueName: \"kubernetes.io/projected/3c1b5571-29ba-481b-86b0-71867be2cfaf-kube-api-access-mkt5f\") pod \"coredns-6d4b75cb6d-l8j76\" (UID: \"3c1b5571-29ba-481b-86b0-71867be2cfaf\") " pod="kube-system/coredns-6d4b75cb6d-l8j76"
	Jul 31 23:27:05 test-preload-931367 kubelet[1143]: I0731 23:27:05.670803    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7eed4d52-536c-45ba-b906-d5dc6e591454-tmp\") pod \"storage-provisioner\" (UID: \"7eed4d52-536c-45ba-b906-d5dc6e591454\") " pod="kube-system/storage-provisioner"
	Jul 31 23:27:05 test-preload-931367 kubelet[1143]: I0731 23:27:05.670920    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf6r8\" (UniqueName: \"kubernetes.io/projected/7eed4d52-536c-45ba-b906-d5dc6e591454-kube-api-access-qf6r8\") pod \"storage-provisioner\" (UID: \"7eed4d52-536c-45ba-b906-d5dc6e591454\") " pod="kube-system/storage-provisioner"
	Jul 31 23:27:05 test-preload-931367 kubelet[1143]: I0731 23:27:05.670946    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/83438a43-2794-47a3-b7e4-561184a83d75-kube-proxy\") pod \"kube-proxy-b798z\" (UID: \"83438a43-2794-47a3-b7e4-561184a83d75\") " pod="kube-system/kube-proxy-b798z"
	Jul 31 23:27:05 test-preload-931367 kubelet[1143]: I0731 23:27:05.671009    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83438a43-2794-47a3-b7e4-561184a83d75-xtables-lock\") pod \"kube-proxy-b798z\" (UID: \"83438a43-2794-47a3-b7e4-561184a83d75\") " pod="kube-system/kube-proxy-b798z"
	Jul 31 23:27:05 test-preload-931367 kubelet[1143]: I0731 23:27:05.671107    1143 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4rmk\" (UniqueName: \"kubernetes.io/projected/83438a43-2794-47a3-b7e4-561184a83d75-kube-api-access-m4rmk\") pod \"kube-proxy-b798z\" (UID: \"83438a43-2794-47a3-b7e4-561184a83d75\") " pod="kube-system/kube-proxy-b798z"
	Jul 31 23:27:05 test-preload-931367 kubelet[1143]: I0731 23:27:05.671235    1143 reconciler.go:159] "Reconciler: start to sync state"
	Jul 31 23:27:05 test-preload-931367 kubelet[1143]: E0731 23:27:05.776672    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 23:27:05 test-preload-931367 kubelet[1143]: E0731 23:27:05.776905    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3c1b5571-29ba-481b-86b0-71867be2cfaf-config-volume podName:3c1b5571-29ba-481b-86b0-71867be2cfaf nodeName:}" failed. No retries permitted until 2024-07-31 23:27:06.276873585 +0000 UTC m=+6.815158564 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3c1b5571-29ba-481b-86b0-71867be2cfaf-config-volume") pod "coredns-6d4b75cb6d-l8j76" (UID: "3c1b5571-29ba-481b-86b0-71867be2cfaf") : object "kube-system"/"coredns" not registered
	Jul 31 23:27:06 test-preload-931367 kubelet[1143]: E0731 23:27:06.280417    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 23:27:06 test-preload-931367 kubelet[1143]: E0731 23:27:06.280501    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3c1b5571-29ba-481b-86b0-71867be2cfaf-config-volume podName:3c1b5571-29ba-481b-86b0-71867be2cfaf nodeName:}" failed. No retries permitted until 2024-07-31 23:27:07.280486291 +0000 UTC m=+7.818771268 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3c1b5571-29ba-481b-86b0-71867be2cfaf-config-volume") pod "coredns-6d4b75cb6d-l8j76" (UID: "3c1b5571-29ba-481b-86b0-71867be2cfaf") : object "kube-system"/"coredns" not registered
	Jul 31 23:27:06 test-preload-931367 kubelet[1143]: I0731 23:27:06.734847    1143 scope.go:110] "RemoveContainer" containerID="72c71dbaaeb0b061e6052556017b7865649ac8ec8a2bd9db5da5f7e5f2aec2d0"
	Jul 31 23:27:07 test-preload-931367 kubelet[1143]: E0731 23:27:07.288578    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 23:27:07 test-preload-931367 kubelet[1143]: E0731 23:27:07.288699    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3c1b5571-29ba-481b-86b0-71867be2cfaf-config-volume podName:3c1b5571-29ba-481b-86b0-71867be2cfaf nodeName:}" failed. No retries permitted until 2024-07-31 23:27:09.288682252 +0000 UTC m=+9.826967218 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3c1b5571-29ba-481b-86b0-71867be2cfaf-config-volume") pod "coredns-6d4b75cb6d-l8j76" (UID: "3c1b5571-29ba-481b-86b0-71867be2cfaf") : object "kube-system"/"coredns" not registered
	Jul 31 23:27:07 test-preload-931367 kubelet[1143]: E0731 23:27:07.695651    1143 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-l8j76" podUID=3c1b5571-29ba-481b-86b0-71867be2cfaf
	Jul 31 23:27:07 test-preload-931367 kubelet[1143]: I0731 23:27:07.702865    1143 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=85124a02-807e-4e2e-be4c-d3863f54060d path="/var/lib/kubelet/pods/85124a02-807e-4e2e-be4c-d3863f54060d/volumes"
	Jul 31 23:27:07 test-preload-931367 kubelet[1143]: I0731 23:27:07.741766    1143 scope.go:110] "RemoveContainer" containerID="3f1b4aa8461323fd60e0ea3c6981c662646e0583b816c34106eeaf871deefce8"
	Jul 31 23:27:07 test-preload-931367 kubelet[1143]: E0731 23:27:07.741946    1143 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7eed4d52-536c-45ba-b906-d5dc6e591454)\"" pod="kube-system/storage-provisioner" podUID=7eed4d52-536c-45ba-b906-d5dc6e591454
	Jul 31 23:27:07 test-preload-931367 kubelet[1143]: I0731 23:27:07.742014    1143 scope.go:110] "RemoveContainer" containerID="72c71dbaaeb0b061e6052556017b7865649ac8ec8a2bd9db5da5f7e5f2aec2d0"
	Jul 31 23:27:08 test-preload-931367 kubelet[1143]: I0731 23:27:08.747671    1143 scope.go:110] "RemoveContainer" containerID="3f1b4aa8461323fd60e0ea3c6981c662646e0583b816c34106eeaf871deefce8"
	Jul 31 23:27:08 test-preload-931367 kubelet[1143]: E0731 23:27:08.748472    1143 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7eed4d52-536c-45ba-b906-d5dc6e591454)\"" pod="kube-system/storage-provisioner" podUID=7eed4d52-536c-45ba-b906-d5dc6e591454
	Jul 31 23:27:09 test-preload-931367 kubelet[1143]: E0731 23:27:09.313244    1143 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 31 23:27:09 test-preload-931367 kubelet[1143]: E0731 23:27:09.313357    1143 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3c1b5571-29ba-481b-86b0-71867be2cfaf-config-volume podName:3c1b5571-29ba-481b-86b0-71867be2cfaf nodeName:}" failed. No retries permitted until 2024-07-31 23:27:13.313338932 +0000 UTC m=+13.851623907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3c1b5571-29ba-481b-86b0-71867be2cfaf-config-volume") pod "coredns-6d4b75cb6d-l8j76" (UID: "3c1b5571-29ba-481b-86b0-71867be2cfaf") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [3f1b4aa8461323fd60e0ea3c6981c662646e0583b816c34106eeaf871deefce8] <==
	I0731 23:27:06.870246       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 23:27:06.873056       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-931367 -n test-preload-931367
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-931367 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-931367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-931367
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-931367: (1.210361908s)
--- FAIL: TestPreload (173.14s)

TestKubernetesUpgrade (389.09s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-351764 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-351764 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m28.853019311s)

-- stdout --
	* [kubernetes-upgrade-351764] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-351764" primary control-plane node in "kubernetes-upgrade-351764" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0731 23:29:15.630252 1218502 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:29:15.630990 1218502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:29:15.631009 1218502 out.go:304] Setting ErrFile to fd 2...
	I0731 23:29:15.631014 1218502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:29:15.631218 1218502 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 23:29:15.631889 1218502 out.go:298] Setting JSON to false
	I0731 23:29:15.633087 1218502 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":25907,"bootTime":1722442649,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 23:29:15.633162 1218502 start.go:139] virtualization: kvm guest
	I0731 23:29:15.634808 1218502 out.go:177] * [kubernetes-upgrade-351764] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 23:29:15.636022 1218502 notify.go:220] Checking for updates...
	I0731 23:29:15.637000 1218502 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 23:29:15.639431 1218502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:29:15.641955 1218502 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:29:15.643420 1218502 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:29:15.645107 1218502 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 23:29:15.646454 1218502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:29:15.647824 1218502 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:29:15.693891 1218502 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 23:29:15.695701 1218502 start.go:297] selected driver: kvm2
	I0731 23:29:15.695717 1218502 start.go:901] validating driver "kvm2" against <nil>
	I0731 23:29:15.695754 1218502 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:29:15.696950 1218502 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:29:15.697048 1218502 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 23:29:15.716600 1218502 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 23:29:15.716675 1218502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 23:29:15.716933 1218502 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 23:29:15.716969 1218502 cni.go:84] Creating CNI manager for ""
	I0731 23:29:15.716986 1218502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 23:29:15.717005 1218502 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 23:29:15.717090 1218502 start.go:340] cluster config:
	{Name:kubernetes-upgrade-351764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-351764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:29:15.717214 1218502 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:29:15.718917 1218502 out.go:177] * Starting "kubernetes-upgrade-351764" primary control-plane node in "kubernetes-upgrade-351764" cluster
	I0731 23:29:15.720517 1218502 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 23:29:15.720588 1218502 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 23:29:15.720601 1218502 cache.go:56] Caching tarball of preloaded images
	I0731 23:29:15.720693 1218502 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 23:29:15.720704 1218502 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0731 23:29:15.721146 1218502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/config.json ...
	I0731 23:29:15.721176 1218502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/config.json: {Name:mk8ffed4636e94a786e92b29d183a1956353f7a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:29:15.721367 1218502 start.go:360] acquireMachinesLock for kubernetes-upgrade-351764: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:29:15.721411 1218502 start.go:364] duration metric: took 27.116µs to acquireMachinesLock for "kubernetes-upgrade-351764"
	I0731 23:29:15.721435 1218502 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-351764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-351764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 23:29:15.721516 1218502 start.go:125] createHost starting for "" (driver="kvm2")
	I0731 23:29:15.723398 1218502 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 23:29:15.723590 1218502 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:29:15.723628 1218502 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:29:15.740434 1218502 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42043
	I0731 23:29:15.740979 1218502 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:29:15.741732 1218502 main.go:141] libmachine: Using API Version  1
	I0731 23:29:15.741759 1218502 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:29:15.742188 1218502 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:29:15.742438 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetMachineName
	I0731 23:29:15.742624 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .DriverName
	I0731 23:29:15.742785 1218502 start.go:159] libmachine.API.Create for "kubernetes-upgrade-351764" (driver="kvm2")
	I0731 23:29:15.742812 1218502 client.go:168] LocalClient.Create starting
	I0731 23:29:15.742850 1218502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem
	I0731 23:29:15.742892 1218502 main.go:141] libmachine: Decoding PEM data...
	I0731 23:29:15.742911 1218502 main.go:141] libmachine: Parsing certificate...
	I0731 23:29:15.742977 1218502 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem
	I0731 23:29:15.743006 1218502 main.go:141] libmachine: Decoding PEM data...
	I0731 23:29:15.743022 1218502 main.go:141] libmachine: Parsing certificate...
	I0731 23:29:15.743044 1218502 main.go:141] libmachine: Running pre-create checks...
	I0731 23:29:15.743058 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .PreCreateCheck
	I0731 23:29:15.743411 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetConfigRaw
	I0731 23:29:15.743890 1218502 main.go:141] libmachine: Creating machine...
	I0731 23:29:15.743908 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .Create
	I0731 23:29:15.744047 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Creating KVM machine...
	I0731 23:29:15.745390 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found existing default KVM network
	I0731 23:29:15.746188 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:15.746029 1218555 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c00}
	I0731 23:29:15.746219 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | created network xml: 
	I0731 23:29:15.746234 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | <network>
	I0731 23:29:15.746243 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG |   <name>mk-kubernetes-upgrade-351764</name>
	I0731 23:29:15.746259 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG |   <dns enable='no'/>
	I0731 23:29:15.746270 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG |   
	I0731 23:29:15.746283 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0731 23:29:15.746297 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG |     <dhcp>
	I0731 23:29:15.746310 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0731 23:29:15.746321 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG |     </dhcp>
	I0731 23:29:15.746332 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG |   </ip>
	I0731 23:29:15.746342 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG |   
	I0731 23:29:15.746350 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | </network>
	I0731 23:29:15.746360 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | 
	I0731 23:29:15.751856 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | trying to create private KVM network mk-kubernetes-upgrade-351764 192.168.39.0/24...
	I0731 23:29:15.833357 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | private KVM network mk-kubernetes-upgrade-351764 192.168.39.0/24 created
	I0731 23:29:15.833390 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Setting up store path in /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764 ...
	I0731 23:29:15.833422 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:15.833300 1218555 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:29:15.833433 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Building disk image from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 23:29:15.833612 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Downloading /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 23:29:16.119357 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:16.119232 1218555 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/id_rsa...
	I0731 23:29:16.285039 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:16.284898 1218555 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/kubernetes-upgrade-351764.rawdisk...
	I0731 23:29:16.285063 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Writing magic tar header
	I0731 23:29:16.285082 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Writing SSH key tar header
	I0731 23:29:16.285090 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:16.285063 1218555 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764 ...
	I0731 23:29:16.285163 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764
	I0731 23:29:16.285179 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines
	I0731 23:29:16.285188 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:29:16.285203 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764 (perms=drwx------)
	I0731 23:29:16.285213 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186
	I0731 23:29:16.285230 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 23:29:16.285239 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Checking permissions on dir: /home/jenkins
	I0731 23:29:16.285246 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Checking permissions on dir: /home
	I0731 23:29:16.285252 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Skipping /home - not owner
	I0731 23:29:16.285263 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines (perms=drwxr-xr-x)
	I0731 23:29:16.285273 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube (perms=drwxr-xr-x)
	I0731 23:29:16.285281 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186 (perms=drwxrwxr-x)
	I0731 23:29:16.285306 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 23:29:16.285322 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 23:29:16.285331 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Creating domain...
	I0731 23:29:16.286462 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) define libvirt domain using xml: 
	I0731 23:29:16.286497 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) <domain type='kvm'>
	I0731 23:29:16.286507 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   <name>kubernetes-upgrade-351764</name>
	I0731 23:29:16.286513 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   <memory unit='MiB'>2200</memory>
	I0731 23:29:16.286522 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   <vcpu>2</vcpu>
	I0731 23:29:16.286529 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   <features>
	I0731 23:29:16.286537 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <acpi/>
	I0731 23:29:16.286541 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <apic/>
	I0731 23:29:16.286549 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <pae/>
	I0731 23:29:16.286567 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     
	I0731 23:29:16.286579 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   </features>
	I0731 23:29:16.286617 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   <cpu mode='host-passthrough'>
	I0731 23:29:16.286632 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   
	I0731 23:29:16.286638 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   </cpu>
	I0731 23:29:16.286647 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   <os>
	I0731 23:29:16.286704 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <type>hvm</type>
	I0731 23:29:16.286740 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <boot dev='cdrom'/>
	I0731 23:29:16.286753 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <boot dev='hd'/>
	I0731 23:29:16.286764 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <bootmenu enable='no'/>
	I0731 23:29:16.286776 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   </os>
	I0731 23:29:16.286787 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   <devices>
	I0731 23:29:16.286804 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <disk type='file' device='cdrom'>
	I0731 23:29:16.286829 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/boot2docker.iso'/>
	I0731 23:29:16.286843 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <target dev='hdc' bus='scsi'/>
	I0731 23:29:16.286852 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <readonly/>
	I0731 23:29:16.286864 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     </disk>
	I0731 23:29:16.286876 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <disk type='file' device='disk'>
	I0731 23:29:16.286902 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 23:29:16.286926 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/kubernetes-upgrade-351764.rawdisk'/>
	I0731 23:29:16.286936 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <target dev='hda' bus='virtio'/>
	I0731 23:29:16.286943 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     </disk>
	I0731 23:29:16.286950 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <interface type='network'>
	I0731 23:29:16.286961 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <source network='mk-kubernetes-upgrade-351764'/>
	I0731 23:29:16.286975 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <model type='virtio'/>
	I0731 23:29:16.286989 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     </interface>
	I0731 23:29:16.287008 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <interface type='network'>
	I0731 23:29:16.287026 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <source network='default'/>
	I0731 23:29:16.287038 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <model type='virtio'/>
	I0731 23:29:16.287048 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     </interface>
	I0731 23:29:16.287061 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <serial type='pty'>
	I0731 23:29:16.287072 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <target port='0'/>
	I0731 23:29:16.287082 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     </serial>
	I0731 23:29:16.287097 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <console type='pty'>
	I0731 23:29:16.287110 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <target type='serial' port='0'/>
	I0731 23:29:16.287121 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     </console>
	I0731 23:29:16.287133 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     <rng model='virtio'>
	I0731 23:29:16.287143 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)       <backend model='random'>/dev/random</backend>
	I0731 23:29:16.287155 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     </rng>
	I0731 23:29:16.287170 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     
	I0731 23:29:16.287182 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)     
	I0731 23:29:16.287193 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764)   </devices>
	I0731 23:29:16.287205 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) </domain>
	I0731 23:29:16.287215 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) 
	I0731 23:29:16.291769 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:98:3e:1c in network default
	I0731 23:29:16.292413 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Ensuring networks are active...
	I0731 23:29:16.292438 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:16.293179 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Ensuring network default is active
	I0731 23:29:16.293506 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Ensuring network mk-kubernetes-upgrade-351764 is active
	I0731 23:29:16.293999 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Getting domain xml...
	I0731 23:29:16.294858 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Creating domain...
	I0731 23:29:17.614663 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Waiting to get IP...
	I0731 23:29:17.616851 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:17.617501 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:17.617527 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:17.617451 1218555 retry.go:31] will retry after 266.929416ms: waiting for machine to come up
	I0731 23:29:17.886218 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:17.886672 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:17.886702 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:17.886631 1218555 retry.go:31] will retry after 387.506014ms: waiting for machine to come up
	I0731 23:29:18.276427 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:18.276864 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:18.276897 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:18.276829 1218555 retry.go:31] will retry after 334.480269ms: waiting for machine to come up
	I0731 23:29:18.613315 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:18.613844 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:18.613872 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:18.613805 1218555 retry.go:31] will retry after 400.580446ms: waiting for machine to come up
	I0731 23:29:19.016650 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:19.017093 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:19.017124 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:19.017068 1218555 retry.go:31] will retry after 499.862971ms: waiting for machine to come up
	I0731 23:29:19.518848 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:19.519277 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:19.519308 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:19.519230 1218555 retry.go:31] will retry after 719.795879ms: waiting for machine to come up
	I0731 23:29:20.240287 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:20.240683 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:20.240730 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:20.240631 1218555 retry.go:31] will retry after 890.172177ms: waiting for machine to come up
	I0731 23:29:21.132364 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:21.132797 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:21.132830 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:21.132737 1218555 retry.go:31] will retry after 969.529818ms: waiting for machine to come up
	I0731 23:29:22.104217 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:22.104883 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:22.105020 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:22.104802 1218555 retry.go:31] will retry after 1.489192663s: waiting for machine to come up
	I0731 23:29:23.595557 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:23.596084 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:23.596139 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:23.596015 1218555 retry.go:31] will retry after 1.671542313s: waiting for machine to come up
	I0731 23:29:25.269424 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:25.269982 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:25.270010 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:25.269942 1218555 retry.go:31] will retry after 2.085980513s: waiting for machine to come up
	I0731 23:29:27.358541 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:27.358964 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:27.359008 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:27.358923 1218555 retry.go:31] will retry after 3.106549685s: waiting for machine to come up
	I0731 23:29:30.469285 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:30.469766 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:30.469796 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:30.469717 1218555 retry.go:31] will retry after 3.688091888s: waiting for machine to come up
	I0731 23:29:34.159044 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:34.159496 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find current IP address of domain kubernetes-upgrade-351764 in network mk-kubernetes-upgrade-351764
	I0731 23:29:34.159527 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | I0731 23:29:34.159418 1218555 retry.go:31] will retry after 3.603103948s: waiting for machine to come up
	I0731 23:29:37.765349 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:37.765978 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Found IP for machine: 192.168.39.228
	I0731 23:29:37.766000 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Reserving static IP address...
	I0731 23:29:37.766012 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has current primary IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:37.766496 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-351764", mac: "52:54:00:52:b4:6e", ip: "192.168.39.228"} in network mk-kubernetes-upgrade-351764
	I0731 23:29:37.856979 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Getting to WaitForSSH function...
	I0731 23:29:37.857011 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Reserved static IP address: 192.168.39.228
	I0731 23:29:37.857025 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Waiting for SSH to be available...
	I0731 23:29:37.860242 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:37.860677 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:minikube Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:37.860715 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:37.860909 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Using SSH client type: external
	I0731 23:29:37.860939 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Using SSH private key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/id_rsa (-rw-------)
	I0731 23:29:37.860991 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0731 23:29:37.861009 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | About to run SSH command:
	I0731 23:29:37.861023 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | exit 0
	I0731 23:29:37.984210 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | SSH cmd err, output: <nil>: 
	I0731 23:29:37.984548 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) KVM machine creation complete!
	I0731 23:29:37.984882 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetConfigRaw
	I0731 23:29:37.985499 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .DriverName
	I0731 23:29:37.985664 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .DriverName
	I0731 23:29:37.985847 1218502 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0731 23:29:37.985864 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetState
	I0731 23:29:37.987099 1218502 main.go:141] libmachine: Detecting operating system of created instance...
	I0731 23:29:37.987133 1218502 main.go:141] libmachine: Waiting for SSH to be available...
	I0731 23:29:37.987142 1218502 main.go:141] libmachine: Getting to WaitForSSH function...
	I0731 23:29:37.987151 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:29:37.989378 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:37.989658 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:37.989684 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:37.989894 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:29:37.990129 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:37.990282 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:37.990452 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:29:37.990622 1218502 main.go:141] libmachine: Using SSH client type: native
	I0731 23:29:37.990879 1218502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0731 23:29:37.990898 1218502 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0731 23:29:38.091473 1218502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:29:38.091502 1218502 main.go:141] libmachine: Detecting the provisioner...
	I0731 23:29:38.091513 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:29:38.094226 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.094494 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:38.094520 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.094748 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:29:38.095008 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:38.095159 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:38.095318 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:29:38.095449 1218502 main.go:141] libmachine: Using SSH client type: native
	I0731 23:29:38.095636 1218502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0731 23:29:38.095647 1218502 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0731 23:29:38.196821 1218502 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0731 23:29:38.196912 1218502 main.go:141] libmachine: found compatible host: buildroot
	I0731 23:29:38.196922 1218502 main.go:141] libmachine: Provisioning with buildroot...
	I0731 23:29:38.196930 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetMachineName
	I0731 23:29:38.197222 1218502 buildroot.go:166] provisioning hostname "kubernetes-upgrade-351764"
	I0731 23:29:38.197263 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetMachineName
	I0731 23:29:38.197474 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:29:38.200325 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.200688 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:38.200738 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.200863 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:29:38.201073 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:38.201222 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:38.201341 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:29:38.201521 1218502 main.go:141] libmachine: Using SSH client type: native
	I0731 23:29:38.201714 1218502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0731 23:29:38.201726 1218502 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-351764 && echo "kubernetes-upgrade-351764" | sudo tee /etc/hostname
	I0731 23:29:38.318727 1218502 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-351764
	
	I0731 23:29:38.318760 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:29:38.321896 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.322278 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:38.322308 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.322513 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:29:38.322766 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:38.322924 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:38.323064 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:29:38.323235 1218502 main.go:141] libmachine: Using SSH client type: native
	I0731 23:29:38.323411 1218502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0731 23:29:38.323426 1218502 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-351764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-351764/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-351764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:29:38.432949 1218502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:29:38.432987 1218502 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 23:29:38.433044 1218502 buildroot.go:174] setting up certificates
	I0731 23:29:38.433063 1218502 provision.go:84] configureAuth start
	I0731 23:29:38.433082 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetMachineName
	I0731 23:29:38.433419 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetIP
	I0731 23:29:38.436244 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.436638 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:38.436667 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.436817 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:29:38.439220 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.439570 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:38.439600 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.439779 1218502 provision.go:143] copyHostCerts
	I0731 23:29:38.439865 1218502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 23:29:38.439875 1218502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 23:29:38.439941 1218502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 23:29:38.440052 1218502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 23:29:38.440067 1218502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 23:29:38.440116 1218502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 23:29:38.440257 1218502 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 23:29:38.440268 1218502 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 23:29:38.440297 1218502 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 23:29:38.440363 1218502 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-351764 san=[127.0.0.1 192.168.39.228 kubernetes-upgrade-351764 localhost minikube]
	I0731 23:29:38.627329 1218502 provision.go:177] copyRemoteCerts
	I0731 23:29:38.627397 1218502 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:29:38.627424 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:29:38.630420 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.630838 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:38.630865 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.631093 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:29:38.631349 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:38.631513 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:29:38.631892 1218502 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/id_rsa Username:docker}
	I0731 23:29:38.714574 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 23:29:38.739240 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0731 23:29:38.764187 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0731 23:29:38.789429 1218502 provision.go:87] duration metric: took 356.346757ms to configureAuth
	I0731 23:29:38.789461 1218502 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:29:38.789622 1218502 config.go:182] Loaded profile config "kubernetes-upgrade-351764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 23:29:38.789703 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:29:38.792631 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.792961 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:38.792993 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:38.793222 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:29:38.793446 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:38.793628 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:38.793762 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:29:38.793933 1218502 main.go:141] libmachine: Using SSH client type: native
	I0731 23:29:38.794109 1218502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0731 23:29:38.794126 1218502 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 23:29:39.056843 1218502 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 23:29:39.056875 1218502 main.go:141] libmachine: Checking connection to Docker...
	I0731 23:29:39.056888 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetURL
	I0731 23:29:39.058168 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Using libvirt version 6000000
	I0731 23:29:39.060212 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.060654 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:39.060689 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.060825 1218502 main.go:141] libmachine: Docker is up and running!
	I0731 23:29:39.060843 1218502 main.go:141] libmachine: Reticulating splines...
	I0731 23:29:39.060853 1218502 client.go:171] duration metric: took 23.318029878s to LocalClient.Create
	I0731 23:29:39.060880 1218502 start.go:167] duration metric: took 23.318095851s to libmachine.API.Create "kubernetes-upgrade-351764"
	I0731 23:29:39.060893 1218502 start.go:293] postStartSetup for "kubernetes-upgrade-351764" (driver="kvm2")
	I0731 23:29:39.060905 1218502 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:29:39.060926 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .DriverName
	I0731 23:29:39.061224 1218502 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:29:39.061261 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:29:39.063780 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.064228 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:39.064267 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.064516 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:29:39.064752 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:39.064928 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:29:39.065047 1218502 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/id_rsa Username:docker}
	I0731 23:29:39.147131 1218502 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:29:39.151480 1218502 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 23:29:39.151517 1218502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 23:29:39.151589 1218502 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 23:29:39.151662 1218502 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 23:29:39.151769 1218502 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:29:39.161942 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:29:39.186710 1218502 start.go:296] duration metric: took 125.79698ms for postStartSetup
	I0731 23:29:39.186779 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetConfigRaw
	I0731 23:29:39.187434 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetIP
	I0731 23:29:39.190257 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.190612 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:39.190642 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.191116 1218502 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/config.json ...
	I0731 23:29:39.191359 1218502 start.go:128] duration metric: took 23.469830344s to createHost
	I0731 23:29:39.191392 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:29:39.193791 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.194086 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:39.194112 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.194285 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:29:39.194526 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:39.194658 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:39.194809 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:29:39.194945 1218502 main.go:141] libmachine: Using SSH client type: native
	I0731 23:29:39.195167 1218502 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.228 22 <nil> <nil>}
	I0731 23:29:39.195187 1218502 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 23:29:39.296866 1218502 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722468579.279035782
	
	I0731 23:29:39.296894 1218502 fix.go:216] guest clock: 1722468579.279035782
	I0731 23:29:39.296903 1218502 fix.go:229] Guest: 2024-07-31 23:29:39.279035782 +0000 UTC Remote: 2024-07-31 23:29:39.191373177 +0000 UTC m=+23.618152933 (delta=87.662605ms)
	I0731 23:29:39.296924 1218502 fix.go:200] guest clock delta is within tolerance: 87.662605ms
	I0731 23:29:39.296929 1218502 start.go:83] releasing machines lock for "kubernetes-upgrade-351764", held for 23.575509922s
	I0731 23:29:39.296959 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .DriverName
	I0731 23:29:39.297302 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetIP
	I0731 23:29:39.299992 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.300371 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:39.300411 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.300587 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .DriverName
	I0731 23:29:39.301204 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .DriverName
	I0731 23:29:39.301426 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .DriverName
	I0731 23:29:39.301547 1218502 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 23:29:39.301599 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:29:39.301697 1218502 ssh_runner.go:195] Run: cat /version.json
	I0731 23:29:39.301728 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:29:39.304484 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.304847 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.304883 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:39.304910 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.305090 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:29:39.305287 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:39.305323 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:39.305348 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:39.305486 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:29:39.305539 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:29:39.305650 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:29:39.305780 1218502 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/id_rsa Username:docker}
	I0731 23:29:39.305818 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:29:39.305946 1218502 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/id_rsa Username:docker}
	I0731 23:29:39.407450 1218502 ssh_runner.go:195] Run: systemctl --version
	I0731 23:29:39.414249 1218502 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 23:29:39.579859 1218502 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 23:29:39.586807 1218502 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:29:39.586898 1218502 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:29:39.603804 1218502 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 23:29:39.603833 1218502 start.go:495] detecting cgroup driver to use...
	I0731 23:29:39.603918 1218502 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:29:39.623005 1218502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:29:39.638301 1218502 docker.go:217] disabling cri-docker service (if available) ...
	I0731 23:29:39.638360 1218502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 23:29:39.653285 1218502 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 23:29:39.667458 1218502 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 23:29:39.784825 1218502 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 23:29:39.950147 1218502 docker.go:233] disabling docker service ...
	I0731 23:29:39.950242 1218502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 23:29:39.964944 1218502 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 23:29:39.978712 1218502 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 23:29:40.098612 1218502 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 23:29:40.216555 1218502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 23:29:40.231108 1218502 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:29:40.250379 1218502 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0731 23:29:40.250459 1218502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:29:40.261513 1218502 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 23:29:40.261592 1218502 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:29:40.272642 1218502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:29:40.284156 1218502 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:29:40.295488 1218502 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:29:40.306813 1218502 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:29:40.317181 1218502 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0731 23:29:40.317248 1218502 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0731 23:29:40.330187 1218502 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:29:40.340670 1218502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:29:40.460960 1218502 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 23:29:40.592856 1218502 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 23:29:40.592919 1218502 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 23:29:40.598854 1218502 start.go:563] Will wait 60s for crictl version
	I0731 23:29:40.598930 1218502 ssh_runner.go:195] Run: which crictl
	I0731 23:29:40.604653 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:29:40.649755 1218502 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 23:29:40.649852 1218502 ssh_runner.go:195] Run: crio --version
	I0731 23:29:40.678115 1218502 ssh_runner.go:195] Run: crio --version
	I0731 23:29:40.709683 1218502 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0731 23:29:40.710931 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetIP
	I0731 23:29:40.714231 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:40.714694 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:29:30 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:29:40.714731 1218502 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:29:40.714946 1218502 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0731 23:29:40.719153 1218502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:29:40.732345 1218502 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-351764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-351764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 23:29:40.732468 1218502 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 23:29:40.732516 1218502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:29:40.765345 1218502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 23:29:40.765413 1218502 ssh_runner.go:195] Run: which lz4
	I0731 23:29:40.769880 1218502 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 23:29:40.774528 1218502 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 23:29:40.774572 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0731 23:29:42.491752 1218502 crio.go:462] duration metric: took 1.72191877s to copy over tarball
	I0731 23:29:42.491840 1218502 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 23:29:45.170859 1218502 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.678989644s)
	I0731 23:29:45.170895 1218502 crio.go:469] duration metric: took 2.679103663s to extract the tarball
	I0731 23:29:45.170904 1218502 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 23:29:45.221159 1218502 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:29:45.265475 1218502 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0731 23:29:45.265506 1218502 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 23:29:45.265588 1218502 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:29:45.265614 1218502 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 23:29:45.265629 1218502 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 23:29:45.265594 1218502 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 23:29:45.265688 1218502 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0731 23:29:45.265699 1218502 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0731 23:29:45.265658 1218502 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 23:29:45.265715 1218502 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0731 23:29:45.267236 1218502 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0731 23:29:45.267321 1218502 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0731 23:29:45.267342 1218502 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 23:29:45.267342 1218502 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0731 23:29:45.267342 1218502 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 23:29:45.267343 1218502 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 23:29:45.267397 1218502 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:29:45.267342 1218502 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 23:29:45.415981 1218502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0731 23:29:45.420131 1218502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0731 23:29:45.427453 1218502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 23:29:45.431393 1218502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0731 23:29:45.447254 1218502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0731 23:29:45.453202 1218502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0731 23:29:45.471685 1218502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0731 23:29:45.522681 1218502 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0731 23:29:45.522722 1218502 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0731 23:29:45.522736 1218502 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0731 23:29:45.522764 1218502 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0731 23:29:45.522812 1218502 ssh_runner.go:195] Run: which crictl
	I0731 23:29:45.522814 1218502 ssh_runner.go:195] Run: which crictl
	I0731 23:29:45.554944 1218502 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0731 23:29:45.554998 1218502 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 23:29:45.555050 1218502 ssh_runner.go:195] Run: which crictl
	I0731 23:29:45.590425 1218502 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0731 23:29:45.590480 1218502 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0731 23:29:45.590497 1218502 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0731 23:29:45.590529 1218502 ssh_runner.go:195] Run: which crictl
	I0731 23:29:45.590541 1218502 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0731 23:29:45.590593 1218502 ssh_runner.go:195] Run: which crictl
	I0731 23:29:45.606005 1218502 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0731 23:29:45.606057 1218502 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0731 23:29:45.606111 1218502 ssh_runner.go:195] Run: which crictl
	I0731 23:29:45.611267 1218502 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0731 23:29:45.611317 1218502 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0731 23:29:45.611325 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 23:29:45.611349 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 23:29:45.611376 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 23:29:45.611356 1218502 ssh_runner.go:195] Run: which crictl
	I0731 23:29:45.611413 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 23:29:45.611414 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 23:29:45.613388 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 23:29:45.766929 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 23:29:45.766969 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 23:29:45.766977 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 23:29:45.767049 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 23:29:45.767083 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 23:29:45.767047 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 23:29:45.767106 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 23:29:45.889320 1218502 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:29:45.938754 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0731 23:29:45.938803 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0731 23:29:45.938771 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 23:29:45.938997 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0731 23:29:45.939048 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0731 23:29:45.939085 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0731 23:29:45.939003 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0731 23:29:46.192614 1218502 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0731 23:29:46.192674 1218502 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0731 23:29:46.192775 1218502 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0731 23:29:46.192783 1218502 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0731 23:29:46.192826 1218502 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0731 23:29:46.192884 1218502 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0731 23:29:46.192912 1218502 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0731 23:29:46.229744 1218502 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0731 23:29:46.229823 1218502 cache_images.go:92] duration metric: took 964.303246ms to LoadCachedImages
	W0731 23:29:46.229911 1218502 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0731 23:29:46.229934 1218502 kubeadm.go:934] updating node { 192.168.39.228 8443 v1.20.0 crio true true} ...
	I0731 23:29:46.230116 1218502 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-351764 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-351764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 23:29:46.230204 1218502 ssh_runner.go:195] Run: crio config
	I0731 23:29:46.276358 1218502 cni.go:84] Creating CNI manager for ""
	I0731 23:29:46.276387 1218502 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 23:29:46.276415 1218502 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 23:29:46.276436 1218502 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.228 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-351764 NodeName:kubernetes-upgrade-351764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0731 23:29:46.276598 1218502 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-351764"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 23:29:46.276668 1218502 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0731 23:29:46.286870 1218502 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:29:46.286960 1218502 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 23:29:46.296881 1218502 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0731 23:29:46.314724 1218502 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:29:46.332302 1218502 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0731 23:29:46.350102 1218502 ssh_runner.go:195] Run: grep 192.168.39.228	control-plane.minikube.internal$ /etc/hosts
	I0731 23:29:46.354223 1218502 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:29:46.367104 1218502 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:29:46.498601 1218502 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:29:46.515428 1218502 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764 for IP: 192.168.39.228
	I0731 23:29:46.515479 1218502 certs.go:194] generating shared ca certs ...
	I0731 23:29:46.515505 1218502 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:29:46.515705 1218502 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 23:29:46.515768 1218502 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 23:29:46.515784 1218502 certs.go:256] generating profile certs ...
	I0731 23:29:46.515877 1218502 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/client.key
	I0731 23:29:46.515909 1218502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/client.crt with IP's: []
	I0731 23:29:46.623870 1218502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/client.crt ...
	I0731 23:29:46.623920 1218502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/client.crt: {Name:mkcdafd6eead681490e70d68320645436b3e1a41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:29:46.624178 1218502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/client.key ...
	I0731 23:29:46.624203 1218502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/client.key: {Name:mk896411c499a84d1a6f4e120abe65ca5b22c60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:29:46.624345 1218502 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.key.f339dcab
	I0731 23:29:46.624366 1218502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.crt.f339dcab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.228]
	I0731 23:29:46.767228 1218502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.crt.f339dcab ...
	I0731 23:29:46.767264 1218502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.crt.f339dcab: {Name:mk00c3e1cbdb0062da0785a97c3c6388813df57a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:29:46.767458 1218502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.key.f339dcab ...
	I0731 23:29:46.767481 1218502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.key.f339dcab: {Name:mkc8492a49179b609e6b0a9e323873be80b5f0e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:29:46.767594 1218502 certs.go:381] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.crt.f339dcab -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.crt
	I0731 23:29:46.767686 1218502 certs.go:385] copying /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.key.f339dcab -> /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.key
	I0731 23:29:46.767775 1218502 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/proxy-client.key
	I0731 23:29:46.767797 1218502 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/proxy-client.crt with IP's: []
	I0731 23:29:46.934197 1218502 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/proxy-client.crt ...
	I0731 23:29:46.934237 1218502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/proxy-client.crt: {Name:mk0d542f4fea5ec1dd01fdb9a52b7465cb6247d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:29:46.934415 1218502 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/proxy-client.key ...
	I0731 23:29:46.934430 1218502 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/proxy-client.key: {Name:mk9d74728504e0328e80384924e68c10e6e83a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:29:46.934605 1218502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 23:29:46.934648 1218502 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 23:29:46.934656 1218502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 23:29:46.934676 1218502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 23:29:46.934697 1218502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 23:29:46.934718 1218502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 23:29:46.934753 1218502 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:29:46.935394 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:29:46.962013 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 23:29:46.990072 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:29:47.016602 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 23:29:47.042977 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0731 23:29:47.068932 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0731 23:29:47.094506 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 23:29:47.121859 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/kubernetes-upgrade-351764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 23:29:47.147435 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:29:47.173581 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 23:29:47.199213 1218502 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 23:29:47.225347 1218502 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 23:29:47.243059 1218502 ssh_runner.go:195] Run: openssl version
	I0731 23:29:47.249208 1218502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 23:29:47.260800 1218502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 23:29:47.265725 1218502 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 23:29:47.265799 1218502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 23:29:47.271937 1218502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:29:47.283023 1218502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:29:47.294588 1218502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:29:47.299430 1218502 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:29:47.299511 1218502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:29:47.305542 1218502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 23:29:47.330915 1218502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 23:29:47.344784 1218502 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 23:29:47.350288 1218502 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 23:29:47.350377 1218502 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 23:29:47.357068 1218502 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
	I0731 23:29:47.373367 1218502 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:29:47.378687 1218502 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0731 23:29:47.378759 1218502 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-351764 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-351764 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.228 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:29:47.378860 1218502 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 23:29:47.378920 1218502 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 23:29:47.418396 1218502 cri.go:89] found id: ""
	I0731 23:29:47.418485 1218502 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0731 23:29:47.433072 1218502 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0731 23:29:47.443149 1218502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 23:29:47.453155 1218502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:29:47.453178 1218502 kubeadm.go:157] found existing configuration files:
	
	I0731 23:29:47.453229 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 23:29:47.462623 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:29:47.462705 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 23:29:47.473058 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 23:29:47.482777 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:29:47.482886 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 23:29:47.492868 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 23:29:47.502676 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:29:47.502749 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 23:29:47.512705 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 23:29:47.522124 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:29:47.522196 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 23:29:47.532037 1218502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 23:29:47.652719 1218502 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 23:29:47.652859 1218502 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 23:29:47.799103 1218502 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 23:29:47.799268 1218502 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 23:29:47.799433 1218502 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 23:29:47.971428 1218502 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 23:29:47.980252 1218502 out.go:204]   - Generating certificates and keys ...
	I0731 23:29:47.980390 1218502 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 23:29:47.980488 1218502 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 23:29:48.350337 1218502 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0731 23:29:48.742492 1218502 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0731 23:29:48.934704 1218502 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0731 23:29:49.299056 1218502 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0731 23:29:49.520795 1218502 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0731 23:29:49.521075 1218502 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-351764 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0731 23:29:49.657429 1218502 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0731 23:29:49.657661 1218502 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-351764 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	I0731 23:29:49.783150 1218502 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0731 23:29:50.137850 1218502 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0731 23:29:50.328639 1218502 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0731 23:29:50.328899 1218502 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 23:29:50.577976 1218502 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 23:29:50.782610 1218502 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 23:29:51.376333 1218502 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 23:29:51.578396 1218502 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 23:29:51.596016 1218502 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 23:29:51.603707 1218502 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 23:29:51.603904 1218502 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 23:29:51.746559 1218502 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 23:29:51.748233 1218502 out.go:204]   - Booting up control plane ...
	I0731 23:29:51.748360 1218502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 23:29:51.753064 1218502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 23:29:51.754570 1218502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 23:29:51.755227 1218502 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 23:29:51.759662 1218502 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 23:30:31.754496 1218502 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 23:30:31.754793 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:30:31.755053 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:30:36.755517 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:30:36.755832 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:30:46.755877 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:30:46.756065 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:31:06.757073 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:31:06.757269 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:31:46.759805 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:31:46.760115 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:31:46.760127 1218502 kubeadm.go:310] 
	I0731 23:31:46.760177 1218502 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 23:31:46.760229 1218502 kubeadm.go:310] 		timed out waiting for the condition
	I0731 23:31:46.760236 1218502 kubeadm.go:310] 
	I0731 23:31:46.760276 1218502 kubeadm.go:310] 	This error is likely caused by:
	I0731 23:31:46.760321 1218502 kubeadm.go:310] 		- The kubelet is not running
	I0731 23:31:46.760447 1218502 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 23:31:46.760455 1218502 kubeadm.go:310] 
	I0731 23:31:46.760550 1218502 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 23:31:46.760577 1218502 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 23:31:46.760603 1218502 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 23:31:46.760607 1218502 kubeadm.go:310] 
	I0731 23:31:46.760697 1218502 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 23:31:46.760767 1218502 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 23:31:46.760772 1218502 kubeadm.go:310] 
	I0731 23:31:46.760869 1218502 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 23:31:46.760968 1218502 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 23:31:46.761047 1218502 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 23:31:46.761106 1218502 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 23:31:46.761109 1218502 kubeadm.go:310] 
	I0731 23:31:46.761955 1218502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:31:46.762078 1218502 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 23:31:46.762159 1218502 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 23:31:46.762332 1218502 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-351764 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-351764 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-351764 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-351764 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 23:31:46.762394 1218502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 23:31:47.361436 1218502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:31:47.377093 1218502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 23:31:47.391110 1218502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:31:47.391138 1218502 kubeadm.go:157] found existing configuration files:
	
	I0731 23:31:47.391204 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 23:31:47.403341 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:31:47.403456 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 23:31:47.416875 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 23:31:47.429337 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:31:47.429423 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 23:31:47.442697 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 23:31:47.455492 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:31:47.455588 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 23:31:47.468044 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 23:31:47.480083 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:31:47.480199 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 23:31:47.495474 1218502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 23:31:47.745355 1218502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:33:43.723481 1218502 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 23:33:43.723583 1218502 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0731 23:33:43.725065 1218502 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0731 23:33:43.725151 1218502 kubeadm.go:310] [preflight] Running pre-flight checks
	I0731 23:33:43.725258 1218502 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0731 23:33:43.725426 1218502 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0731 23:33:43.725591 1218502 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0731 23:33:43.725678 1218502 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0731 23:33:43.727460 1218502 out.go:204]   - Generating certificates and keys ...
	I0731 23:33:43.727567 1218502 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0731 23:33:43.727642 1218502 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0731 23:33:43.727774 1218502 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0731 23:33:43.727867 1218502 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0731 23:33:43.727975 1218502 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0731 23:33:43.728046 1218502 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0731 23:33:43.728157 1218502 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0731 23:33:43.728241 1218502 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0731 23:33:43.728317 1218502 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0731 23:33:43.728440 1218502 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0731 23:33:43.728500 1218502 kubeadm.go:310] [certs] Using the existing "sa" key
	I0731 23:33:43.728578 1218502 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0731 23:33:43.728663 1218502 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0731 23:33:43.728733 1218502 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0731 23:33:43.728823 1218502 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0731 23:33:43.728895 1218502 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0731 23:33:43.729056 1218502 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0731 23:33:43.729175 1218502 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0731 23:33:43.729234 1218502 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0731 23:33:43.729331 1218502 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0731 23:33:43.730685 1218502 out.go:204]   - Booting up control plane ...
	I0731 23:33:43.730785 1218502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0731 23:33:43.730855 1218502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0731 23:33:43.730943 1218502 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0731 23:33:43.731059 1218502 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0731 23:33:43.731231 1218502 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0731 23:33:43.731291 1218502 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0731 23:33:43.731357 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:33:43.731553 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:33:43.731653 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:33:43.731872 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:33:43.731956 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:33:43.732176 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:33:43.732264 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:33:43.732448 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:33:43.732521 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:33:43.732688 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:33:43.732697 1218502 kubeadm.go:310] 
	I0731 23:33:43.732741 1218502 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 23:33:43.732778 1218502 kubeadm.go:310] 		timed out waiting for the condition
	I0731 23:33:43.732792 1218502 kubeadm.go:310] 
	I0731 23:33:43.732834 1218502 kubeadm.go:310] 	This error is likely caused by:
	I0731 23:33:43.732863 1218502 kubeadm.go:310] 		- The kubelet is not running
	I0731 23:33:43.732961 1218502 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 23:33:43.732968 1218502 kubeadm.go:310] 
	I0731 23:33:43.733067 1218502 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 23:33:43.733110 1218502 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 23:33:43.733144 1218502 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 23:33:43.733150 1218502 kubeadm.go:310] 
	I0731 23:33:43.733242 1218502 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 23:33:43.733311 1218502 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 23:33:43.733322 1218502 kubeadm.go:310] 
	I0731 23:33:43.733421 1218502 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 23:33:43.733494 1218502 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 23:33:43.733578 1218502 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 23:33:43.733669 1218502 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 23:33:43.733689 1218502 kubeadm.go:310] 
	I0731 23:33:43.733754 1218502 kubeadm.go:394] duration metric: took 3m56.355004767s to StartCluster
	I0731 23:33:43.733821 1218502 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0731 23:33:43.733894 1218502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0731 23:33:43.773284 1218502 cri.go:89] found id: ""
	I0731 23:33:43.773312 1218502 logs.go:276] 0 containers: []
	W0731 23:33:43.773319 1218502 logs.go:278] No container was found matching "kube-apiserver"
	I0731 23:33:43.773327 1218502 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0731 23:33:43.773399 1218502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0731 23:33:43.808174 1218502 cri.go:89] found id: ""
	I0731 23:33:43.808216 1218502 logs.go:276] 0 containers: []
	W0731 23:33:43.808230 1218502 logs.go:278] No container was found matching "etcd"
	I0731 23:33:43.808249 1218502 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0731 23:33:43.808329 1218502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0731 23:33:43.846701 1218502 cri.go:89] found id: ""
	I0731 23:33:43.846729 1218502 logs.go:276] 0 containers: []
	W0731 23:33:43.846745 1218502 logs.go:278] No container was found matching "coredns"
	I0731 23:33:43.846751 1218502 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0731 23:33:43.846808 1218502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0731 23:33:43.890564 1218502 cri.go:89] found id: ""
	I0731 23:33:43.890598 1218502 logs.go:276] 0 containers: []
	W0731 23:33:43.890609 1218502 logs.go:278] No container was found matching "kube-scheduler"
	I0731 23:33:43.890617 1218502 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0731 23:33:43.890682 1218502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0731 23:33:43.927257 1218502 cri.go:89] found id: ""
	I0731 23:33:43.927292 1218502 logs.go:276] 0 containers: []
	W0731 23:33:43.927303 1218502 logs.go:278] No container was found matching "kube-proxy"
	I0731 23:33:43.927311 1218502 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0731 23:33:43.927385 1218502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0731 23:33:43.968583 1218502 cri.go:89] found id: ""
	I0731 23:33:43.968617 1218502 logs.go:276] 0 containers: []
	W0731 23:33:43.968631 1218502 logs.go:278] No container was found matching "kube-controller-manager"
	I0731 23:33:43.968640 1218502 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0731 23:33:43.968717 1218502 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0731 23:33:44.009749 1218502 cri.go:89] found id: ""
	I0731 23:33:44.009779 1218502 logs.go:276] 0 containers: []
	W0731 23:33:44.009789 1218502 logs.go:278] No container was found matching "kindnet"
	I0731 23:33:44.009803 1218502 logs.go:123] Gathering logs for kubelet ...
	I0731 23:33:44.009820 1218502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0731 23:33:44.085894 1218502 logs.go:123] Gathering logs for dmesg ...
	I0731 23:33:44.085937 1218502 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0731 23:33:44.103743 1218502 logs.go:123] Gathering logs for describe nodes ...
	I0731 23:33:44.103799 1218502 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0731 23:33:44.237417 1218502 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0731 23:33:44.237451 1218502 logs.go:123] Gathering logs for CRI-O ...
	I0731 23:33:44.237468 1218502 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0731 23:33:44.365972 1218502 logs.go:123] Gathering logs for container status ...
	I0731 23:33:44.366027 1218502 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0731 23:33:44.407459 1218502 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0731 23:33:44.407513 1218502 out.go:239] * 
	* 
	W0731 23:33:44.407583 1218502 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 23:33:44.407605 1218502 out.go:239] * 
	W0731 23:33:44.408585 1218502 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0731 23:33:44.411637 1218502 out.go:177] 
	W0731 23:33:44.412983 1218502 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0731 23:33:44.413037 1218502 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0731 23:33:44.413064 1218502 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0731 23:33:44.414543 1218502 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-351764 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
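The exit status 109 here corresponds to the K8S_KUBELET_NOT_RUNNING exit shown in the stderr above, and the suggestion block in that output points at a possible kubelet/CRI-O cgroup-driver mismatch on the v1.20.0 bootstrap. A minimal manual check on the node, following the commands the log itself suggests (the `minikube ssh` entry point and the grep targets are assumptions about where the relevant settings live on this image; the profile was still running when this failure was captured):

	minikube ssh -p kubernetes-upgrade-351764
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	grep -i cgroup /var/lib/kubelet/config.yaml /etc/crio/crio.conf.d/02-crio.conf

If the two drivers disagree, the retry proposed by the log would be the original start command plus the suggested flag:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-351764 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd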
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-351764
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-351764: (1.33756135s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-351764 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-351764 status --format={{.Host}}: exit status 7 (70.718302ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
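For context on the "(may be ok)": going by minikube's documented status exit-code convention (host, cluster, and Kubernetes "not OK" bits summed), 7 is what a fully stopped profile reports, which matches the Stopped output above. A quick sanity check, reusing the command from the harness:

	out/minikube-linux-amd64 -p kubernetes-upgrade-351764 status --format={{.Host}}   # prints Stopped
	echo $?                                                                           # 7 while the VM is down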
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-351764 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-351764 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.112230075s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-351764 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-351764 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-351764 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (87.54492ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-351764] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-351764
	    minikube start -p kubernetes-upgrade-351764 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3517642 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-351764 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
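Downgrades are refused by design, so exit status 106 with K8S_DOWNGRADE_UNSUPPORTED is the expected outcome of this step. Outside the harness, option 1 from the message above is the clean way to actually land on the older version; sketched here with the driver/runtime flags this job uses elsewhere (the suggestion itself omits them):

	minikube delete -p kubernetes-upgrade-351764
	minikube start -p kubernetes-upgrade-351764 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio

The test instead takes option 3 and restarts the existing cluster on v1.31.0-beta.0, as the next step shows.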
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-351764 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-351764 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.839893657s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-31 23:35:40.991955222 +0000 UTC m=+5969.861846595
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-351764 -n kubernetes-upgrade-351764
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-351764 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-351764 logs -n 25: (1.775756621s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-771916 sudo crio            | cilium-771916             | jenkins | v1.33.1 | 31 Jul 24 23:32 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-771916                      | cilium-771916             | jenkins | v1.33.1 | 31 Jul 24 23:32 UTC | 31 Jul 24 23:32 UTC |
	| start   | -p cert-expiration-676954             | cert-expiration-676954    | jenkins | v1.33.1 | 31 Jul 24 23:33 UTC | 31 Jul 24 23:33 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-524949             | running-upgrade-524949    | jenkins | v1.33.1 | 31 Jul 24 23:33 UTC | 31 Jul 24 23:33 UTC |
	| start   | -p force-systemd-flag-351616          | force-systemd-flag-351616 | jenkins | v1.33.1 | 31 Jul 24 23:33 UTC | 31 Jul 24 23:34 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-741714                | NoKubernetes-741714       | jenkins | v1.33.1 | 31 Jul 24 23:33 UTC | 31 Jul 24 23:34 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-351764          | kubernetes-upgrade-351764 | jenkins | v1.33.1 | 31 Jul 24 23:33 UTC | 31 Jul 24 23:33 UTC |
	| start   | -p kubernetes-upgrade-351764          | kubernetes-upgrade-351764 | jenkins | v1.33.1 | 31 Jul 24 23:33 UTC | 31 Jul 24 23:34 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-741714                | NoKubernetes-741714       | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:34 UTC |
	| start   | -p NoKubernetes-741714                | NoKubernetes-741714       | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:34 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-351616 ssh cat     | force-systemd-flag-351616 | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:34 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-351616          | force-systemd-flag-351616 | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:34 UTC |
	| start   | -p cert-options-555856                | cert-options-555856       | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:35 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-351764          | kubernetes-upgrade-351764 | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-351764          | kubernetes-upgrade-351764 | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:35 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-741714 sudo           | NoKubernetes-741714       | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-741714                | NoKubernetes-741714       | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:34 UTC |
	| start   | -p NoKubernetes-741714                | NoKubernetes-741714       | jenkins | v1.33.1 | 31 Jul 24 23:34 UTC | 31 Jul 24 23:35 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-555856 ssh               | cert-options-555856       | jenkins | v1.33.1 | 31 Jul 24 23:35 UTC | 31 Jul 24 23:35 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-555856 -- sudo        | cert-options-555856       | jenkins | v1.33.1 | 31 Jul 24 23:35 UTC | 31 Jul 24 23:35 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-555856                | cert-options-555856       | jenkins | v1.33.1 | 31 Jul 24 23:35 UTC | 31 Jul 24 23:35 UTC |
	| start   | -p old-k8s-version-242296             | old-k8s-version-242296    | jenkins | v1.33.1 | 31 Jul 24 23:35 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-741714 sudo           | NoKubernetes-741714       | jenkins | v1.33.1 | 31 Jul 24 23:35 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-741714                | NoKubernetes-741714       | jenkins | v1.33.1 | 31 Jul 24 23:35 UTC | 31 Jul 24 23:35 UTC |
	| start   | -p no-preload-459209 --memory=2200    | no-preload-459209         | jenkins | v1.33.1 | 31 Jul 24 23:35 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 23:35:39
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 23:35:39.938974 1226555 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:35:39.939159 1226555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:35:39.939173 1226555 out.go:304] Setting ErrFile to fd 2...
	I0731 23:35:39.939180 1226555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:35:39.939397 1226555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 23:35:39.940064 1226555 out.go:298] Setting JSON to false
	I0731 23:35:39.941599 1226555 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":26291,"bootTime":1722442649,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 23:35:39.941686 1226555 start.go:139] virtualization: kvm guest
	I0731 23:35:39.943868 1226555 out.go:177] * [no-preload-459209] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 23:35:39.945520 1226555 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 23:35:39.945576 1226555 notify.go:220] Checking for updates...
	I0731 23:35:39.947875 1226555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:35:39.949088 1226555 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:35:39.950300 1226555 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:35:39.951496 1226555 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 23:35:39.952579 1226555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:35:39.954207 1226555 config.go:182] Loaded profile config "cert-expiration-676954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:35:39.954306 1226555 config.go:182] Loaded profile config "kubernetes-upgrade-351764": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0731 23:35:39.954395 1226555 config.go:182] Loaded profile config "old-k8s-version-242296": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0731 23:35:39.954541 1226555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:35:39.998584 1226555 out.go:177] * Using the kvm2 driver based on user configuration
	I0731 23:35:39.999943 1226555 start.go:297] selected driver: kvm2
	I0731 23:35:39.999967 1226555 start.go:901] validating driver "kvm2" against <nil>
	I0731 23:35:39.999984 1226555 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:35:40.001035 1226555 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:35:40.001172 1226555 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 23:35:40.021627 1226555 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 23:35:40.021693 1226555 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 23:35:40.021955 1226555 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:35:40.021990 1226555 cni.go:84] Creating CNI manager for ""
	I0731 23:35:40.022001 1226555 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 23:35:40.022010 1226555 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 23:35:40.022085 1226555 start.go:340] cluster config:
	{Name:no-preload-459209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-459209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:35:40.022221 1226555 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:35:40.024669 1226555 out.go:177] * Starting "no-preload-459209" primary control-plane node in "no-preload-459209" cluster
	I0731 23:35:36.879577 1226277 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0731 23:35:36.879806 1226277 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:35:36.879857 1226277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:35:36.902047 1226277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35749
	I0731 23:35:36.902588 1226277 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:35:36.903194 1226277 main.go:141] libmachine: Using API Version  1
	I0731 23:35:36.903224 1226277 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:35:36.903633 1226277 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:35:36.903827 1226277 main.go:141] libmachine: (old-k8s-version-242296) Calling .GetMachineName
	I0731 23:35:36.904008 1226277 main.go:141] libmachine: (old-k8s-version-242296) Calling .DriverName
	I0731 23:35:36.904221 1226277 start.go:159] libmachine.API.Create for "old-k8s-version-242296" (driver="kvm2")
	I0731 23:35:36.904266 1226277 client.go:168] LocalClient.Create starting
	I0731 23:35:36.904311 1226277 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem
	I0731 23:35:36.904365 1226277 main.go:141] libmachine: Decoding PEM data...
	I0731 23:35:36.904388 1226277 main.go:141] libmachine: Parsing certificate...
	I0731 23:35:36.904462 1226277 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem
	I0731 23:35:36.904503 1226277 main.go:141] libmachine: Decoding PEM data...
	I0731 23:35:36.904522 1226277 main.go:141] libmachine: Parsing certificate...
	I0731 23:35:36.904543 1226277 main.go:141] libmachine: Running pre-create checks...
	I0731 23:35:36.904564 1226277 main.go:141] libmachine: (old-k8s-version-242296) Calling .PreCreateCheck
	I0731 23:35:36.907010 1226277 main.go:141] libmachine: (old-k8s-version-242296) Calling .GetConfigRaw
	I0731 23:35:36.907654 1226277 main.go:141] libmachine: Creating machine...
	I0731 23:35:36.907674 1226277 main.go:141] libmachine: (old-k8s-version-242296) Calling .Create
	I0731 23:35:36.907902 1226277 main.go:141] libmachine: (old-k8s-version-242296) Creating KVM machine...
	I0731 23:35:36.909343 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | found existing default KVM network
	I0731 23:35:36.911278 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:36.911059 1226317 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:20:ba:e6} reservation:<nil>}
	I0731 23:35:36.912665 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:36.912555 1226317 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:01:c4:6b} reservation:<nil>}
	I0731 23:35:36.914259 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:36.914126 1226317 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:41:98:01} reservation:<nil>}
	I0731 23:35:36.915913 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:36.915794 1226317 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000311780}
	I0731 23:35:36.915961 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | created network xml: 
	I0731 23:35:36.915980 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | <network>
	I0731 23:35:36.915998 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG |   <name>mk-old-k8s-version-242296</name>
	I0731 23:35:36.916029 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG |   <dns enable='no'/>
	I0731 23:35:36.916053 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG |   
	I0731 23:35:36.916067 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0731 23:35:36.916082 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG |     <dhcp>
	I0731 23:35:36.916123 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0731 23:35:36.916137 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG |     </dhcp>
	I0731 23:35:36.916149 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG |   </ip>
	I0731 23:35:36.916159 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG |   
	I0731 23:35:36.916170 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | </network>
	I0731 23:35:36.916182 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | 
	I0731 23:35:36.922452 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | trying to create private KVM network mk-old-k8s-version-242296 192.168.72.0/24...
	I0731 23:35:37.018108 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | private KVM network mk-old-k8s-version-242296 192.168.72.0/24 created
	I0731 23:35:37.018141 1226277 main.go:141] libmachine: (old-k8s-version-242296) Setting up store path in /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/old-k8s-version-242296 ...
	I0731 23:35:37.018164 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:37.018056 1226317 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:35:37.018179 1226277 main.go:141] libmachine: (old-k8s-version-242296) Building disk image from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 23:35:37.018205 1226277 main.go:141] libmachine: (old-k8s-version-242296) Downloading /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0731 23:35:37.307606 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:37.307346 1226317 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/old-k8s-version-242296/id_rsa...
	I0731 23:35:37.469709 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:37.469517 1226317 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/old-k8s-version-242296/old-k8s-version-242296.rawdisk...
	I0731 23:35:37.469757 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | Writing magic tar header
	I0731 23:35:37.469778 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | Writing SSH key tar header
	I0731 23:35:37.469792 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:37.469654 1226317 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/old-k8s-version-242296 ...
	I0731 23:35:37.469809 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/old-k8s-version-242296
	I0731 23:35:37.469828 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines
	I0731 23:35:37.469847 1226277 main.go:141] libmachine: (old-k8s-version-242296) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/old-k8s-version-242296 (perms=drwx------)
	I0731 23:35:37.469866 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:35:37.469885 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19312-1172186
	I0731 23:35:37.469900 1226277 main.go:141] libmachine: (old-k8s-version-242296) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube/machines (perms=drwxr-xr-x)
	I0731 23:35:37.469910 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0731 23:35:37.469921 1226277 main.go:141] libmachine: (old-k8s-version-242296) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186/.minikube (perms=drwxr-xr-x)
	I0731 23:35:37.469934 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | Checking permissions on dir: /home/jenkins
	I0731 23:35:37.469948 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | Checking permissions on dir: /home
	I0731 23:35:37.469964 1226277 main.go:141] libmachine: (old-k8s-version-242296) Setting executable bit set on /home/jenkins/minikube-integration/19312-1172186 (perms=drwxrwxr-x)
	I0731 23:35:37.469973 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | Skipping /home - not owner
	I0731 23:35:37.469988 1226277 main.go:141] libmachine: (old-k8s-version-242296) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0731 23:35:37.470005 1226277 main.go:141] libmachine: (old-k8s-version-242296) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0731 23:35:37.470019 1226277 main.go:141] libmachine: (old-k8s-version-242296) Creating domain...
	I0731 23:35:37.471417 1226277 main.go:141] libmachine: (old-k8s-version-242296) define libvirt domain using xml: 
	I0731 23:35:37.471455 1226277 main.go:141] libmachine: (old-k8s-version-242296) <domain type='kvm'>
	I0731 23:35:37.471466 1226277 main.go:141] libmachine: (old-k8s-version-242296)   <name>old-k8s-version-242296</name>
	I0731 23:35:37.471482 1226277 main.go:141] libmachine: (old-k8s-version-242296)   <memory unit='MiB'>2200</memory>
	I0731 23:35:37.471497 1226277 main.go:141] libmachine: (old-k8s-version-242296)   <vcpu>2</vcpu>
	I0731 23:35:37.471515 1226277 main.go:141] libmachine: (old-k8s-version-242296)   <features>
	I0731 23:35:37.471562 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <acpi/>
	I0731 23:35:37.471593 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <apic/>
	I0731 23:35:37.471609 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <pae/>
	I0731 23:35:37.471625 1226277 main.go:141] libmachine: (old-k8s-version-242296)     
	I0731 23:35:37.471639 1226277 main.go:141] libmachine: (old-k8s-version-242296)   </features>
	I0731 23:35:37.471649 1226277 main.go:141] libmachine: (old-k8s-version-242296)   <cpu mode='host-passthrough'>
	I0731 23:35:37.471663 1226277 main.go:141] libmachine: (old-k8s-version-242296)   
	I0731 23:35:37.471672 1226277 main.go:141] libmachine: (old-k8s-version-242296)   </cpu>
	I0731 23:35:37.471681 1226277 main.go:141] libmachine: (old-k8s-version-242296)   <os>
	I0731 23:35:37.471690 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <type>hvm</type>
	I0731 23:35:37.471720 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <boot dev='cdrom'/>
	I0731 23:35:37.471739 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <boot dev='hd'/>
	I0731 23:35:37.471752 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <bootmenu enable='no'/>
	I0731 23:35:37.471774 1226277 main.go:141] libmachine: (old-k8s-version-242296)   </os>
	I0731 23:35:37.471786 1226277 main.go:141] libmachine: (old-k8s-version-242296)   <devices>
	I0731 23:35:37.471797 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <disk type='file' device='cdrom'>
	I0731 23:35:37.471816 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/old-k8s-version-242296/boot2docker.iso'/>
	I0731 23:35:37.471828 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <target dev='hdc' bus='scsi'/>
	I0731 23:35:37.471839 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <readonly/>
	I0731 23:35:37.471846 1226277 main.go:141] libmachine: (old-k8s-version-242296)     </disk>
	I0731 23:35:37.471870 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <disk type='file' device='disk'>
	I0731 23:35:37.471890 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0731 23:35:37.471913 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <source file='/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/old-k8s-version-242296/old-k8s-version-242296.rawdisk'/>
	I0731 23:35:37.471925 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <target dev='hda' bus='virtio'/>
	I0731 23:35:37.471939 1226277 main.go:141] libmachine: (old-k8s-version-242296)     </disk>
	I0731 23:35:37.471950 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <interface type='network'>
	I0731 23:35:37.471965 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <source network='mk-old-k8s-version-242296'/>
	I0731 23:35:37.471976 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <model type='virtio'/>
	I0731 23:35:37.471989 1226277 main.go:141] libmachine: (old-k8s-version-242296)     </interface>
	I0731 23:35:37.471997 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <interface type='network'>
	I0731 23:35:37.472014 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <source network='default'/>
	I0731 23:35:37.472025 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <model type='virtio'/>
	I0731 23:35:37.472038 1226277 main.go:141] libmachine: (old-k8s-version-242296)     </interface>
	I0731 23:35:37.472050 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <serial type='pty'>
	I0731 23:35:37.472060 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <target port='0'/>
	I0731 23:35:37.472070 1226277 main.go:141] libmachine: (old-k8s-version-242296)     </serial>
	I0731 23:35:37.472078 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <console type='pty'>
	I0731 23:35:37.472117 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <target type='serial' port='0'/>
	I0731 23:35:37.472129 1226277 main.go:141] libmachine: (old-k8s-version-242296)     </console>
	I0731 23:35:37.472138 1226277 main.go:141] libmachine: (old-k8s-version-242296)     <rng model='virtio'>
	I0731 23:35:37.472147 1226277 main.go:141] libmachine: (old-k8s-version-242296)       <backend model='random'>/dev/random</backend>
	I0731 23:35:37.472157 1226277 main.go:141] libmachine: (old-k8s-version-242296)     </rng>
	I0731 23:35:37.472164 1226277 main.go:141] libmachine: (old-k8s-version-242296)     
	I0731 23:35:37.472174 1226277 main.go:141] libmachine: (old-k8s-version-242296)     
	I0731 23:35:37.472186 1226277 main.go:141] libmachine: (old-k8s-version-242296)   </devices>
	I0731 23:35:37.472195 1226277 main.go:141] libmachine: (old-k8s-version-242296) </domain>
	I0731 23:35:37.472207 1226277 main.go:141] libmachine: (old-k8s-version-242296) 
	I0731 23:35:37.478028 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | domain old-k8s-version-242296 has defined MAC address 52:54:00:64:23:de in network default
	I0731 23:35:37.478808 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | domain old-k8s-version-242296 has defined MAC address 52:54:00:ea:bb:1f in network mk-old-k8s-version-242296
	I0731 23:35:37.478826 1226277 main.go:141] libmachine: (old-k8s-version-242296) Ensuring networks are active...
	I0731 23:35:37.479806 1226277 main.go:141] libmachine: (old-k8s-version-242296) Ensuring network default is active
	I0731 23:35:37.480308 1226277 main.go:141] libmachine: (old-k8s-version-242296) Ensuring network mk-old-k8s-version-242296 is active
	I0731 23:35:37.480987 1226277 main.go:141] libmachine: (old-k8s-version-242296) Getting domain xml...
	I0731 23:35:37.482040 1226277 main.go:141] libmachine: (old-k8s-version-242296) Creating domain...
	I0731 23:35:38.956024 1226277 main.go:141] libmachine: (old-k8s-version-242296) Waiting to get IP...
	I0731 23:35:38.956817 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | domain old-k8s-version-242296 has defined MAC address 52:54:00:ea:bb:1f in network mk-old-k8s-version-242296
	I0731 23:35:38.957281 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | unable to find current IP address of domain old-k8s-version-242296 in network mk-old-k8s-version-242296
	I0731 23:35:38.957315 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:38.957264 1226317 retry.go:31] will retry after 264.708743ms: waiting for machine to come up
	I0731 23:35:39.223921 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | domain old-k8s-version-242296 has defined MAC address 52:54:00:ea:bb:1f in network mk-old-k8s-version-242296
	I0731 23:35:39.224625 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | unable to find current IP address of domain old-k8s-version-242296 in network mk-old-k8s-version-242296
	I0731 23:35:39.224652 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:39.224576 1226317 retry.go:31] will retry after 323.941327ms: waiting for machine to come up
	I0731 23:35:39.793929 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | domain old-k8s-version-242296 has defined MAC address 52:54:00:ea:bb:1f in network mk-old-k8s-version-242296
	I0731 23:35:39.794846 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | unable to find current IP address of domain old-k8s-version-242296 in network mk-old-k8s-version-242296
	I0731 23:35:39.794875 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:39.794769 1226317 retry.go:31] will retry after 373.080802ms: waiting for machine to come up
	I0731 23:35:40.169524 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | domain old-k8s-version-242296 has defined MAC address 52:54:00:ea:bb:1f in network mk-old-k8s-version-242296
	I0731 23:35:40.170137 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | unable to find current IP address of domain old-k8s-version-242296 in network mk-old-k8s-version-242296
	I0731 23:35:40.170206 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:40.170127 1226317 retry.go:31] will retry after 367.743444ms: waiting for machine to come up
	I0731 23:35:40.539798 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | domain old-k8s-version-242296 has defined MAC address 52:54:00:ea:bb:1f in network mk-old-k8s-version-242296
	I0731 23:35:40.540316 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | unable to find current IP address of domain old-k8s-version-242296 in network mk-old-k8s-version-242296
	I0731 23:35:40.540341 1226277 main.go:141] libmachine: (old-k8s-version-242296) DBG | I0731 23:35:40.540273 1226317 retry.go:31] will retry after 737.468613ms: waiting for machine to come up
	I0731 23:35:39.798127 1225366 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:35:39.798149 1225366 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0731 23:35:39.798173 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:35:39.801897 1225366 api_server.go:72] duration metric: took 254.937996ms to wait for apiserver process to appear ...
	I0731 23:35:39.801928 1225366 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:35:39.801955 1225366 api_server.go:253] Checking apiserver healthz at https://192.168.39.228:8443/healthz ...
	I0731 23:35:39.802426 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:35:39.803015 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:34:12 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:35:39.803051 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:35:39.803335 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:35:39.803569 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:35:39.803755 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:35:39.803943 1225366 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/id_rsa Username:docker}
	I0731 23:35:39.809816 1225366 api_server.go:279] https://192.168.39.228:8443/healthz returned 200:
	ok
	I0731 23:35:39.812511 1225366 api_server.go:141] control plane version: v1.31.0-beta.0
	I0731 23:35:39.812545 1225366 api_server.go:131] duration metric: took 10.608806ms to wait for apiserver health ...
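
The healthz wait recorded above is a plain HTTPS poll against the apiserver until /healthz answers 200. A minimal sketch of that check follows; TLS verification is skipped here only to keep the example short, whereas the real client uses the cluster's CA bundle.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it returns 200 OK or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %v", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.228:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```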
	I0731 23:35:39.812558 1225366 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 23:35:39.818351 1225366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37219
	I0731 23:35:39.819167 1225366 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:35:39.819814 1225366 main.go:141] libmachine: Using API Version  1
	I0731 23:35:39.819836 1225366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:35:39.820464 1225366 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:35:39.820943 1225366 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:35:39.820986 1225366 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:35:39.827735 1225366 system_pods.go:59] 8 kube-system pods found
	I0731 23:35:39.827777 1225366 system_pods.go:61] "coredns-5cfdc65f69-2jkkc" [cc359633-df19-4b20-a61d-b4facc30e4dd] Running
	I0731 23:35:39.827783 1225366 system_pods.go:61] "coredns-5cfdc65f69-m2fqx" [c23be1f5-e707-42f9-ae36-c870a2d48b2e] Running
	I0731 23:35:39.827793 1225366 system_pods.go:61] "etcd-kubernetes-upgrade-351764" [3d9e73d2-f37c-428f-b693-8da9af9e0c76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0731 23:35:39.827802 1225366 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-351764" [9f43634a-43ca-409b-82af-04e0649fed2d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0731 23:35:39.827816 1225366 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-351764" [7d14599d-7d69-4e12-b6b6-b0903d7dad89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0731 23:35:39.827823 1225366 system_pods.go:61] "kube-proxy-68j6x" [242e4da8-a69d-4691-ad2a-6739e5e302da] Running
	I0731 23:35:39.827831 1225366 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-351764" [5a108197-3279-4554-866e-217215fe512c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0731 23:35:39.827838 1225366 system_pods.go:61] "storage-provisioner" [84de5627-df72-443c-8467-80eb943d7d80] Running
	I0731 23:35:39.827849 1225366 system_pods.go:74] duration metric: took 15.282298ms to wait for pod list to return data ...
	I0731 23:35:39.827869 1225366 kubeadm.go:582] duration metric: took 280.91598ms to wait for: map[apiserver:true system_pods:true]
	I0731 23:35:39.827888 1225366 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:35:39.832477 1225366 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:35:39.832507 1225366 node_conditions.go:123] node cpu capacity is 2
	I0731 23:35:39.832519 1225366 node_conditions.go:105] duration metric: took 4.625531ms to run NodePressure ...
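
The NodePressure step reads capacity straight off the Node objects (17734596Ki of ephemeral storage and 2 CPUs here). A minimal client-go sketch that lists the same figures is shown below; the kubeconfig path is a placeholder, not one taken from this run.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute the cluster's real one.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```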
	I0731 23:35:39.832533 1225366 start.go:241] waiting for startup goroutines ...
	I0731 23:35:39.838936 1225366 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38497
	I0731 23:35:39.839531 1225366 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:35:39.840329 1225366 main.go:141] libmachine: Using API Version  1
	I0731 23:35:39.840362 1225366 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:35:39.840710 1225366 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:35:39.840950 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetState
	I0731 23:35:39.842970 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .DriverName
	I0731 23:35:39.843296 1225366 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0731 23:35:39.843315 1225366 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0731 23:35:39.843338 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHHostname
	I0731 23:35:39.847118 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:35:39.847743 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:b4:6e", ip: ""} in network mk-kubernetes-upgrade-351764: {Iface:virbr1 ExpiryTime:2024-08-01 00:34:12 +0000 UTC Type:0 Mac:52:54:00:52:b4:6e Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:kubernetes-upgrade-351764 Clientid:01:52:54:00:52:b4:6e}
	I0731 23:35:39.847783 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | domain kubernetes-upgrade-351764 has defined IP address 192.168.39.228 and MAC address 52:54:00:52:b4:6e in network mk-kubernetes-upgrade-351764
	I0731 23:35:39.847950 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHPort
	I0731 23:35:39.848185 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHKeyPath
	I0731 23:35:39.848357 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .GetSSHUsername
	I0731 23:35:39.848536 1225366 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/id_rsa Username:docker}
	I0731 23:35:39.940672 1225366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0731 23:35:39.983355 1225366 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
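
The two addon manifests are applied by running kubectl inside the VM over SSH, as the ssh_runner lines above record. Below is a minimal sketch of that step using golang.org/x/crypto/ssh; the host, user and key path are taken from the log, and error handling is deliberately terse.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/kubernetes-upgrade-351764/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.228:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same command shape as the ssh_runner line above.
	out, err := session.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
		"/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}
```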
	I0731 23:35:40.805078 1225366 main.go:141] libmachine: Making call to close driver server
	I0731 23:35:40.805103 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .Close
	I0731 23:35:40.805228 1225366 main.go:141] libmachine: Making call to close driver server
	I0731 23:35:40.805252 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .Close
	I0731 23:35:40.805439 1225366 main.go:141] libmachine: Successfully made call to close driver server
	I0731 23:35:40.805461 1225366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 23:35:40.805476 1225366 main.go:141] libmachine: Making call to close driver server
	I0731 23:35:40.805484 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .Close
	I0731 23:35:40.805629 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Closing plugin on server side
	I0731 23:35:40.805665 1225366 main.go:141] libmachine: Successfully made call to close driver server
	I0731 23:35:40.805674 1225366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 23:35:40.805691 1225366 main.go:141] libmachine: Making call to close driver server
	I0731 23:35:40.805698 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .Close
	I0731 23:35:40.805797 1225366 main.go:141] libmachine: Successfully made call to close driver server
	I0731 23:35:40.805816 1225366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 23:35:40.805851 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Closing plugin on server side
	I0731 23:35:40.807602 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Closing plugin on server side
	I0731 23:35:40.807694 1225366 main.go:141] libmachine: Successfully made call to close driver server
	I0731 23:35:40.807729 1225366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 23:35:40.908960 1225366 main.go:141] libmachine: Making call to close driver server
	I0731 23:35:40.908991 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) Calling .Close
	I0731 23:35:40.909319 1225366 main.go:141] libmachine: Successfully made call to close driver server
	I0731 23:35:40.909340 1225366 main.go:141] libmachine: Making call to close connection to plugin binary
	I0731 23:35:40.909340 1225366 main.go:141] libmachine: (kubernetes-upgrade-351764) DBG | Closing plugin on server side
	I0731 23:35:40.911805 1225366 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0731 23:35:40.913138 1225366 addons.go:510] duration metric: took 1.366182241s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0731 23:35:40.913190 1225366 start.go:246] waiting for cluster config update ...
	I0731 23:35:40.913205 1225366 start.go:255] writing updated cluster config ...
	I0731 23:35:40.913529 1225366 ssh_runner.go:195] Run: rm -f paused
	I0731 23:35:40.972780 1225366 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0731 23:35:40.974316 1225366 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-351764" cluster and "default" namespace by default
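
The final check compares the local kubectl version against the cluster version and reports the minor skew (a 1.30 client against a 1.31.0-beta.0 cluster, so skew 1, which kubectl tolerates). A tiny sketch of that comparison, using plain string parsing rather than any particular semver library:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a version string such as "1.31.0-beta.0".
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	client, cluster := "1.30.3", "1.31.0-beta.0"
	skew := minor(cluster) - minor(client)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}
```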
	
	
	==> CRI-O <==
	Jul 31 23:35:41 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:41.871604177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468941871573741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61dfd24e-4813-420b-a8e0-21e9a994a9ed name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:35:41 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:41.875253067Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4595bc11-82e2-424e-8104-21709653798c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:35:41 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:41.875324342Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4595bc11-82e2-424e-8104-21709653798c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:35:41 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:41.876256010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c0b3a50e70cb8a58373a4ddfd513d973d7bdeb6e500037a2509dd0520f8e9b5,PodSandboxId:e94f2162fbf8a374b8b8cb91a2af9ad2b23b051750dd29478505c216910458af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722468937892843943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68j6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242e4da8-a69d-4691-ad2a-6739e5e302da,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d27584471755deb1a0d524450083af4f30154b89e055ad605a2daa302801bfd,PodSandboxId:9fca0664314582058f7f8387f9594aeffdca8f44bda60ee67b0b2160a432b237,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722468937877607827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84de5627-df72-443c-8467-80eb943d7d80,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb7569fc76d20f561f52eb204201ebfd3a9fdfa2950f4d4e401aa9d0187948ae,PodSandboxId:20c8ea1197045bd8f8432f4a89a8311ccfb29779486026b6afb7d7a4703725c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722468935055454908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0560ecee450709f23327212e4ee0602,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9356f29f6de0f9841db89eb524b11e1c751e1d7588c7c1c483153717ac486bc,PodSandboxId:1f9cf04cdb39fa04ffff439c3f223da59ea81e680a9a35283fa7209a9d1c22de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722468935058045604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e71b6ec45b952278a9979a537305bd58,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973341536b2407e29ab006f5a4b851303126d75ef4deed7c3df7096e4ca02c74,PodSandboxId:4d46c0133101d7c86b0a02b9e6afc35209a8ec2596bb6837e8a258883dca7820,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722468935019766751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e90f3f39448f4bc12adc483ec8c47e2,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef16a916946e213dd1789ba3894589ce5a420ba52a6e8d72ccb42ce49711ebd,PodSandboxId:e85b54070ae0922d77041eb05ae9dd8bf507afbda2092b73efc9cfe27699d3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722468931841768343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6663245f93dcc97fb3ab83ad8e9c29bb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:823d0810af4832afd5ce327457cd9d1f2cce267e9ee568fce887b67d03ad6a49,PodSandboxId:9fca0664314582058f7f8387f9594aeffdca8f44bda60ee67b0b2160a432b237,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722468920509869458,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84de5627-df72-443c-8467-80eb943d7d80,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997d2ca83f51acfadb8a5eac21559c4fad2e8d5ed3ea9f128c0feab2215dbbc,PodSandboxId:e194f79a03500f54b75df56e4bd1b687c18e3260d1d987366a4dea3e95f5bace,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468921474452663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2jkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc359633-df19-4b20-a61d-b4facc30e4dd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c224f5e822cb3257d66b19a5e4947105f8ec8c4ac25b1b6572843d56e295bca,PodSandboxId:00a3b0f7e19c08cca102c5f0717ab4c47a871ea72b7eac95efaa35d2abffd1ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468921203965765,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m2fqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23be1f5-e707-42f9-ae36-c870a2d48b2e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6cf8b4ab43bbd8daac1c7c4cf438e22349dcbc46856c4088860ad52f76186e1,PodSandboxId:e94f2162fbf8a374b8b8cb91a2af9ad2b23b051750dd29478505c216910458af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722468920099338832,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68j6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242e4da8-a69d-4691-ad2a-6739e5e302da,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c66512805a60138ea2e92513fb8535c395393a0dd8f2b52b9637ca52816030fe,PodSandboxId:e85b54070ae0922d77041eb05ae9dd8bf507afbda2092b73efc9cfe27699d3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722468919923628314,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6663245f93dcc97fb3ab83ad8e9c29bb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454be4035cfa07a6c877f3682cf1f807725976848c9bf4f1cb000d52d8595dbb,PodSandboxId:1f9cf04cdb39fa04ffff439c3f223da59ea81e680a9a35283fa7209a9d1c22de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722468919897153763,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e71b6ec45b952278a9979a537305bd58,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb0659ad41cd7f16d4ce88cfa758ce368acd9c97785c80ed8c80146319af260,PodSandboxId:20c8ea1197045bd8f8432f4a89a8311ccfb29779486026b6afb7d7a4703725c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722468919845509504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kub
e-controller-manager-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0560ecee450709f23327212e4ee0602,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06445c26106e9367325ec2fa2c5e7f7927c91f6600c88eef52852d34ae7e15a,PodSandboxId:4d46c0133101d7c86b0a02b9e6afc35209a8ec2596bb6837e8a258883dca7820,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722468919645574383,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e90f3f39448f4bc12adc483ec8c47e2,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42f9fb65d2b120e563124b4a8b8e52b4cd8ea4f66e15f50b074634feebee44e,PodSandboxId:37c7e74537fd45e23d829bd6f89a512f1dfd76c54b5da8bbdcec02f43704d9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468883295689792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2
jkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc359633-df19-4b20-a61d-b4facc30e4dd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551185fa98bc0b316360508593de616af86676ad9e0245e2c35019b0ae318962,PodSandboxId:beedfde75c7432e8d28e2a38d432984573ad652251086f4856c3b73538e6deb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468883300151272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m2fqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23be1f5-e707-42f9-ae36-c870a2d48b2e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4595bc11-82e2-424e-8104-21709653798c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:35:41 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:41.984316236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=088e55a5-618f-4ddb-b144-c533a5ca057b name=/runtime.v1.RuntimeService/Version
	Jul 31 23:35:41 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:41.984402598Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=088e55a5-618f-4ddb-b144-c533a5ca057b name=/runtime.v1.RuntimeService/Version
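
The Version request/response pair above is the standard CRI gRPC call against CRI-O's runtime service. A minimal sketch of the same query using k8s.io/cri-api follows; the unix socket path is the conventional CRI-O default and is an assumption here, not something read from this run.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Conventional CRI-O socket; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	// Expected output for this run: cri-o 1.29.1 (CRI v1), matching the response logged above.
	fmt.Printf("%s %s (CRI %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
}
```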
	Jul 31 23:35:41 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:41.986336727Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59883024-d768-4254-8f59-9f137c4f10c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:35:41 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:41.987170326Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468941987131390,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59883024-d768-4254-8f59-9f137c4f10c1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:35:41 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:41.987947111Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b45caaeb-4296-4571-88f4-787584841278 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:35:41 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:41.988005821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b45caaeb-4296-4571-88f4-787584841278 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:35:41 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:41.988331760Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c0b3a50e70cb8a58373a4ddfd513d973d7bdeb6e500037a2509dd0520f8e9b5,PodSandboxId:e94f2162fbf8a374b8b8cb91a2af9ad2b23b051750dd29478505c216910458af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722468937892843943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68j6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242e4da8-a69d-4691-ad2a-6739e5e302da,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d27584471755deb1a0d524450083af4f30154b89e055ad605a2daa302801bfd,PodSandboxId:9fca0664314582058f7f8387f9594aeffdca8f44bda60ee67b0b2160a432b237,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722468937877607827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84de5627-df72-443c-8467-80eb943d7d80,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb7569fc76d20f561f52eb204201ebfd3a9fdfa2950f4d4e401aa9d0187948ae,PodSandboxId:20c8ea1197045bd8f8432f4a89a8311ccfb29779486026b6afb7d7a4703725c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722468935055454908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0560ecee450709f23327212e4ee0602,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9356f29f6de0f9841db89eb524b11e1c751e1d7588c7c1c483153717ac486bc,PodSandboxId:1f9cf04cdb39fa04ffff439c3f223da59ea81e680a9a35283fa7209a9d1c22de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722468935058045604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e71b6ec45b952278a9979a537305bd58,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973341536b2407e29ab006f5a4b851303126d75ef4deed7c3df7096e4ca02c74,PodSandboxId:4d46c0133101d7c86b0a02b9e6afc35209a8ec2596bb6837e8a258883dca7820,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722468935019766751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e90f3f39448f4bc12adc483ec8c47e2,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef16a916946e213dd1789ba3894589ce5a420ba52a6e8d72ccb42ce49711ebd,PodSandboxId:e85b54070ae0922d77041eb05ae9dd8bf507afbda2092b73efc9cfe27699d3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722468931841768343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6663245f93dcc97fb3ab83ad8e9c29bb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:823d0810af4832afd5ce327457cd9d1f2cce267e9ee568fce887b67d03ad6a49,PodSandboxId:9fca0664314582058f7f8387f9594aeffdca8f44bda60ee67b0b2160a432b237,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722468920509869458,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84de5627-df72-443c-8467-80eb943d7d80,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997d2ca83f51acfadb8a5eac21559c4fad2e8d5ed3ea9f128c0feab2215dbbc,PodSandboxId:e194f79a03500f54b75df56e4bd1b687c18e3260d1d987366a4dea3e95f5bace,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468921474452663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2jkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc359633-df19-4b20-a61d-b4facc30e4dd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c224f5e822cb3257d66b19a5e4947105f8ec8c4ac25b1b6572843d56e295bca,PodSandboxId:00a3b0f7e19c08cca102c5f0717ab4c47a871ea72b7eac95efaa35d2abffd1ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468921203965765,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m2fqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23be1f5-e707-42f9-ae36-c870a2d48b2e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6cf8b4ab43bbd8daac1c7c4cf438e22349dcbc46856c4088860ad52f76186e1,PodSandboxId:e94f2162fbf8a374b8b8cb91a2af9ad2b23b051750dd29478505c216910458af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722468920099338832,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68j6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242e4da8-a69d-4691-ad2a-6739e5e302da,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c66512805a60138ea2e92513fb8535c395393a0dd8f2b52b9637ca52816030fe,PodSandboxId:e85b54070ae0922d77041eb05ae9dd8bf507afbda2092b73efc9cfe27699d3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722468919923628314,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6663245f93dcc97fb3ab83ad8e9c29bb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454be4035cfa07a6c877f3682cf1f807725976848c9bf4f1cb000d52d8595dbb,PodSandboxId:1f9cf04cdb39fa04ffff439c3f223da59ea81e680a9a35283fa7209a9d1c22de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722468919897153763,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e71b6ec45b952278a9979a537305bd58,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb0659ad41cd7f16d4ce88cfa758ce368acd9c97785c80ed8c80146319af260,PodSandboxId:20c8ea1197045bd8f8432f4a89a8311ccfb29779486026b6afb7d7a4703725c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722468919845509504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kub
e-controller-manager-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0560ecee450709f23327212e4ee0602,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06445c26106e9367325ec2fa2c5e7f7927c91f6600c88eef52852d34ae7e15a,PodSandboxId:4d46c0133101d7c86b0a02b9e6afc35209a8ec2596bb6837e8a258883dca7820,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722468919645574383,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e90f3f39448f4bc12adc483ec8c47e2,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42f9fb65d2b120e563124b4a8b8e52b4cd8ea4f66e15f50b074634feebee44e,PodSandboxId:37c7e74537fd45e23d829bd6f89a512f1dfd76c54b5da8bbdcec02f43704d9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468883295689792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2
jkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc359633-df19-4b20-a61d-b4facc30e4dd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551185fa98bc0b316360508593de616af86676ad9e0245e2c35019b0ae318962,PodSandboxId:beedfde75c7432e8d28e2a38d432984573ad652251086f4856c3b73538e6deb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468883300151272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m2fqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23be1f5-e707-42f9-ae36-c870a2d48b2e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b45caaeb-4296-4571-88f4-787584841278 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.034495754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c002379c-d031-478b-8c15-f91ba57f1ac4 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.034618991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c002379c-d031-478b-8c15-f91ba57f1ac4 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.035804283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a6e9914-44e7-4e15-a25e-eb7d2f1b54aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.036160904Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468942036139478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a6e9914-44e7-4e15-a25e-eb7d2f1b54aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.036793703Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f23cd3d2-c224-459c-b541-2641fe3baac3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.036852574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f23cd3d2-c224-459c-b541-2641fe3baac3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.037155870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c0b3a50e70cb8a58373a4ddfd513d973d7bdeb6e500037a2509dd0520f8e9b5,PodSandboxId:e94f2162fbf8a374b8b8cb91a2af9ad2b23b051750dd29478505c216910458af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722468937892843943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68j6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242e4da8-a69d-4691-ad2a-6739e5e302da,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d27584471755deb1a0d524450083af4f30154b89e055ad605a2daa302801bfd,PodSandboxId:9fca0664314582058f7f8387f9594aeffdca8f44bda60ee67b0b2160a432b237,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722468937877607827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84de5627-df72-443c-8467-80eb943d7d80,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb7569fc76d20f561f52eb204201ebfd3a9fdfa2950f4d4e401aa9d0187948ae,PodSandboxId:20c8ea1197045bd8f8432f4a89a8311ccfb29779486026b6afb7d7a4703725c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722468935055454908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0560ecee450709f23327212e4ee0602,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9356f29f6de0f9841db89eb524b11e1c751e1d7588c7c1c483153717ac486bc,PodSandboxId:1f9cf04cdb39fa04ffff439c3f223da59ea81e680a9a35283fa7209a9d1c22de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722468935058045604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e71b6ec45b952278a9979a537305bd58,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973341536b2407e29ab006f5a4b851303126d75ef4deed7c3df7096e4ca02c74,PodSandboxId:4d46c0133101d7c86b0a02b9e6afc35209a8ec2596bb6837e8a258883dca7820,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722468935019766751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e90f3f39448f4bc12adc483ec8c47e2,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef16a916946e213dd1789ba3894589ce5a420ba52a6e8d72ccb42ce49711ebd,PodSandboxId:e85b54070ae0922d77041eb05ae9dd8bf507afbda2092b73efc9cfe27699d3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722468931841768343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6663245f93dcc97fb3ab83ad8e9c29bb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:823d0810af4832afd5ce327457cd9d1f2cce267e9ee568fce887b67d03ad6a49,PodSandboxId:9fca0664314582058f7f8387f9594aeffdca8f44bda60ee67b0b2160a432b237,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722468920509869458,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84de5627-df72-443c-8467-80eb943d7d80,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997d2ca83f51acfadb8a5eac21559c4fad2e8d5ed3ea9f128c0feab2215dbbc,PodSandboxId:e194f79a03500f54b75df56e4bd1b687c18e3260d1d987366a4dea3e95f5bace,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468921474452663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2jkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc359633-df19-4b20-a61d-b4facc30e4dd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c224f5e822cb3257d66b19a5e4947105f8ec8c4ac25b1b6572843d56e295bca,PodSandboxId:00a3b0f7e19c08cca102c5f0717ab4c47a871ea72b7eac95efaa35d2abffd1ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468921203965765,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m2fqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23be1f5-e707-42f9-ae36-c870a2d48b2e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6cf8b4ab43bbd8daac1c7c4cf438e22349dcbc46856c4088860ad52f76186e1,PodSandboxId:e94f2162fbf8a374b8b8cb91a2af9ad2b23b051750dd29478505c216910458af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722468920099338832,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68j6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242e4da8-a69d-4691-ad2a-6739e5e302da,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c66512805a60138ea2e92513fb8535c395393a0dd8f2b52b9637ca52816030fe,PodSandboxId:e85b54070ae0922d77041eb05ae9dd8bf507afbda2092b73efc9cfe27699d3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722468919923628314,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6663245f93dcc97fb3ab83ad8e9c29bb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454be4035cfa07a6c877f3682cf1f807725976848c9bf4f1cb000d52d8595dbb,PodSandboxId:1f9cf04cdb39fa04ffff439c3f223da59ea81e680a9a35283fa7209a9d1c22de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722468919897153763,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e71b6ec45b952278a9979a537305bd58,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb0659ad41cd7f16d4ce88cfa758ce368acd9c97785c80ed8c80146319af260,PodSandboxId:20c8ea1197045bd8f8432f4a89a8311ccfb29779486026b6afb7d7a4703725c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722468919845509504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kub
e-controller-manager-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0560ecee450709f23327212e4ee0602,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06445c26106e9367325ec2fa2c5e7f7927c91f6600c88eef52852d34ae7e15a,PodSandboxId:4d46c0133101d7c86b0a02b9e6afc35209a8ec2596bb6837e8a258883dca7820,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722468919645574383,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e90f3f39448f4bc12adc483ec8c47e2,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42f9fb65d2b120e563124b4a8b8e52b4cd8ea4f66e15f50b074634feebee44e,PodSandboxId:37c7e74537fd45e23d829bd6f89a512f1dfd76c54b5da8bbdcec02f43704d9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468883295689792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2
jkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc359633-df19-4b20-a61d-b4facc30e4dd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551185fa98bc0b316360508593de616af86676ad9e0245e2c35019b0ae318962,PodSandboxId:beedfde75c7432e8d28e2a38d432984573ad652251086f4856c3b73538e6deb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468883300151272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m2fqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23be1f5-e707-42f9-ae36-c870a2d48b2e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f23cd3d2-c224-459c-b541-2641fe3baac3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.074155266Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=20d00783-0d30-49f7-9a91-fa8699a559b2 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.074243798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=20d00783-0d30-49f7-9a91-fa8699a559b2 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.080161298Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=301bcfc0-89b2-4971-968e-debb070cb180 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.080730823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468942080700806,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=301bcfc0-89b2-4971-968e-debb070cb180 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.082329962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16bcd048-760c-4f70-b96b-bcad6e453f37 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.082395509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16bcd048-760c-4f70-b96b-bcad6e453f37 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:35:42 kubernetes-upgrade-351764 crio[2342]: time="2024-07-31 23:35:42.082759543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5c0b3a50e70cb8a58373a4ddfd513d973d7bdeb6e500037a2509dd0520f8e9b5,PodSandboxId:e94f2162fbf8a374b8b8cb91a2af9ad2b23b051750dd29478505c216910458af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_RUNNING,CreatedAt:1722468937892843943,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68j6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242e4da8-a69d-4691-ad2a-6739e5e302da,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d27584471755deb1a0d524450083af4f30154b89e055ad605a2daa302801bfd,PodSandboxId:9fca0664314582058f7f8387f9594aeffdca8f44bda60ee67b0b2160a432b237,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722468937877607827,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84de5627-df72-443c-8467-80eb943d7d80,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb7569fc76d20f561f52eb204201ebfd3a9fdfa2950f4d4e401aa9d0187948ae,PodSandboxId:20c8ea1197045bd8f8432f4a89a8311ccfb29779486026b6afb7d7a4703725c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722468935055454908,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0560ecee450709f23327212e4ee0602,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9356f29f6de0f9841db89eb524b11e1c751e1d7588c7c1c483153717ac486bc,PodSandboxId:1f9cf04cdb39fa04ffff439c3f223da59ea81e680a9a35283fa7209a9d1c22de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722468935058045604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e71b6ec45b952278a9979a537305bd58,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973341536b2407e29ab006f5a4b851303126d75ef4deed7c3df7096e4ca02c74,PodSandboxId:4d46c0133101d7c86b0a02b9e6afc35209a8ec2596bb6837e8a258883dca7820,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722468935019766751,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e90f3f39448f4bc12adc483ec8c47e2,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef16a916946e213dd1789ba3894589ce5a420ba52a6e8d72ccb42ce49711ebd,PodSandboxId:e85b54070ae0922d77041eb05ae9dd8bf507afbda2092b73efc9cfe27699d3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722468931841768343,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6663245f93dcc97fb3ab83ad8e9c29bb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:823d0810af4832afd5ce327457cd9d1f2cce267e9ee568fce887b67d03ad6a49,PodSandboxId:9fca0664314582058f7f8387f9594aeffdca8f44bda60ee67b0b2160a432b237,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722468920509869458,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84de5627-df72-443c-8467-80eb943d7d80,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuberne
tes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5997d2ca83f51acfadb8a5eac21559c4fad2e8d5ed3ea9f128c0feab2215dbbc,PodSandboxId:e194f79a03500f54b75df56e4bd1b687c18e3260d1d987366a4dea3e95f5bace,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468921474452663,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2jkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc359633-df19-4b20-a61d-b4facc30e4dd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\
"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c224f5e822cb3257d66b19a5e4947105f8ec8c4ac25b1b6572843d56e295bca,PodSandboxId:00a3b0f7e19c08cca102c5f0717ab4c47a871ea72b7eac95efaa35d2abffd1ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468921203965765,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m2fqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23be1f5-e707-42f9-ae36-c870a2d48b2e,},Annotations:map[string]s
tring{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d6cf8b4ab43bbd8daac1c7c4cf438e22349dcbc46856c4088860ad52f76186e1,PodSandboxId:e94f2162fbf8a374b8b8cb91a2af9ad2b23b051750dd29478505c216910458af,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722468920099338832,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-68j6x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242e4da8-a69d-4691-ad2a-6739e5e302da,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c66512805a60138ea2e92513fb8535c395393a0dd8f2b52b9637ca52816030fe,PodSandboxId:e85b54070ae0922d77041eb05ae9dd8bf507afbda2092b73efc9cfe27699d3c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722468919923628314,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name
: etcd-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6663245f93dcc97fb3ab83ad8e9c29bb,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:454be4035cfa07a6c877f3682cf1f807725976848c9bf4f1cb000d52d8595dbb,PodSandboxId:1f9cf04cdb39fa04ffff439c3f223da59ea81e680a9a35283fa7209a9d1c22de,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:1722468919897153763,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-
kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e71b6ec45b952278a9979a537305bd58,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb0659ad41cd7f16d4ce88cfa758ce368acd9c97785c80ed8c80146319af260,PodSandboxId:20c8ea1197045bd8f8432f4a89a8311ccfb29779486026b6afb7d7a4703725c0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722468919845509504,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kub
e-controller-manager-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0560ecee450709f23327212e4ee0602,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a06445c26106e9367325ec2fa2c5e7f7927c91f6600c88eef52852d34ae7e15a,PodSandboxId:4d46c0133101d7c86b0a02b9e6afc35209a8ec2596bb6837e8a258883dca7820,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722468919645574383,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-kubernetes-upgrade-351764,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e90f3f39448f4bc12adc483ec8c47e2,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42f9fb65d2b120e563124b4a8b8e52b4cd8ea4f66e15f50b074634feebee44e,PodSandboxId:37c7e74537fd45e23d829bd6f89a512f1dfd76c54b5da8bbdcec02f43704d9bd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468883295689792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-2
jkkc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc359633-df19-4b20-a61d-b4facc30e4dd,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:551185fa98bc0b316360508593de616af86676ad9e0245e2c35019b0ae318962,PodSandboxId:beedfde75c7432e8d28e2a38d432984573ad652251086f4856c3b73538e6deb5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468883300151272,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-m2fqx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c23be1f5-e707-42f9-ae36-c870a2d48b2e,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16bcd048-760c-4f70-b96b-bcad6e453f37 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5c0b3a50e70cb       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   4 seconds ago       Running             kube-proxy                2                   e94f2162fbf8a       kube-proxy-68j6x
	6d27584471755       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       3                   9fca066431458       storage-provisioner
	f9356f29f6de0       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago       Running             kube-apiserver            2                   1f9cf04cdb39f       kube-apiserver-kubernetes-upgrade-351764
	cb7569fc76d20       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago       Running             kube-controller-manager   2                   20c8ea1197045       kube-controller-manager-kubernetes-upgrade-351764
	973341536b240       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   7 seconds ago       Running             kube-scheduler            2                   4d46c0133101d       kube-scheduler-kubernetes-upgrade-351764
	9ef16a916946e       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   10 seconds ago      Running             etcd                      2                   e85b54070ae09       etcd-kubernetes-upgrade-351764
	5997d2ca83f51       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Running             coredns                   1                   e194f79a03500       coredns-5cfdc65f69-2jkkc
	6c224f5e822cb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   20 seconds ago      Running             coredns                   1                   00a3b0f7e19c0       coredns-5cfdc65f69-m2fqx
	823d0810af483       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   21 seconds ago      Exited              storage-provisioner       2                   9fca066431458       storage-provisioner
	d6cf8b4ab43bb       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   22 seconds ago      Exited              kube-proxy                1                   e94f2162fbf8a       kube-proxy-68j6x
	c66512805a601       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   22 seconds ago      Exited              etcd                      1                   e85b54070ae09       etcd-kubernetes-upgrade-351764
	454be4035cfa0       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   22 seconds ago      Exited              kube-apiserver            1                   1f9cf04cdb39f       kube-apiserver-kubernetes-upgrade-351764
	7eb0659ad41cd       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   22 seconds ago      Exited              kube-controller-manager   1                   20c8ea1197045       kube-controller-manager-kubernetes-upgrade-351764
	a06445c26106e       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   22 seconds ago      Exited              kube-scheduler            1                   4d46c0133101d       kube-scheduler-kubernetes-upgrade-351764
	551185fa98bc0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   58 seconds ago      Exited              coredns                   0                   beedfde75c743       coredns-5cfdc65f69-m2fqx
	f42f9fb65d2b1       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   58 seconds ago      Exited              coredns                   0                   37c7e74537fd4       coredns-5cfdc65f69-2jkkc
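
	The table above is rendered from the same /runtime.v1.RuntimeService/ListContainers responses that CRI-O logs earlier in this section (it matches what `sudo crictl ps -a` prints on the node). Below is a minimal Go sketch of issuing that ListContainers call directly against the CRI socket; it assumes the default CRI-O socket path /var/run/crio/crio.sock and uses the standard cri-api and grpc-go modules, so treat it as an illustration rather than part of the captured output.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O socket; crictl and the kubelet talk to this same endpoint.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// An empty filter returns the full container list, matching the
	// "No filters were applied, returning full container list" debug lines above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %s  attempt=%d  %s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}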
	
	
	==> coredns [551185fa98bc0b316360508593de616af86676ad9e0245e2c35019b0ae318962] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5997d2ca83f51acfadb8a5eac21559c4fad2e8d5ed3ea9f128c0feab2215dbbc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:38916->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2081428010]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 23:35:21.895) (total time: 10923ms):
	Trace[2081428010]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:38916->10.96.0.1:443: read: connection reset by peer 10923ms (23:35:32.819)
	Trace[2081428010]: [10.923853064s] [10.923853064s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:38916->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:38922->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[708805695]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 23:35:21.895) (total time: 10923ms):
	Trace[708805695]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:38922->10.96.0.1:443: read: connection reset by peer 10923ms (23:35:32.819)
	Trace[708805695]: [10.923571142s] [10.923571142s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:38922->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:38910->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1388343953]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 23:35:21.894) (total time: 10925ms):
	Trace[1388343953]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:38910->10.96.0.1:443: read: connection reset by peer 10924ms (23:35:32.819)
	Trace[1388343953]: [10.925175627s] [10.925175627s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:38910->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> coredns [6c224f5e822cb3257d66b19a5e4947105f8ec8c4ac25b1b6572843d56e295bca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:57262->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1969374090]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 23:35:21.869) (total time: 10949ms):
	Trace[1969374090]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:57262->10.96.0.1:443: read: connection reset by peer 10948ms (23:35:32.818)
	Trace[1969374090]: [10.94986085s] [10.94986085s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:57262->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:57276->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1983018861]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 23:35:21.870) (total time: 10949ms):
	Trace[1983018861]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:57276->10.96.0.1:443: read: connection reset by peer 10949ms (23:35:32.819)
	Trace[1983018861]: [10.949640428s] [10.949640428s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:57276->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:57260->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2077143757]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-Jul-2024 23:35:21.864) (total time: 10955ms):
	Trace[2077143757]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:57260->10.96.0.1:443: read: connection reset by peer 10955ms (23:35:32.819)
	Trace[2077143757]: [10.955390763s] [10.955390763s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:57260->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
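
	The reflector errors above are CoreDNS's kubernetes plugin retrying its initial List calls against the Service VIP 10.96.0.1:443 while the kube-apiserver container restarts (attempt 1 exits and attempt 2 comes up a few seconds later, per the container status table). The following is a minimal client-go sketch of the EndpointSlice List request seen in those traces; it assumes in-cluster configuration and takes the limit value from the logged request URL, so it is an illustration only.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config points at the kubernetes Service VIP (10.96.0.1 in this cluster).
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Mirrors GET /apis/discovery.k8s.io/v1/endpointslices?limit=500 from the trace;
	// while the apiserver is down this fails with "connection refused" and a
	// reflector would back off and retry, as the [ERROR] lines show.
	slices, err := clientset.DiscoveryV1().EndpointSlices(metav1.NamespaceAll).
		List(context.Background(), metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("endpointslices:", len(slices.Items))
}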
	
	
	==> coredns [f42f9fb65d2b120e563124b4a8b8e52b4cd8ea4f66e15f50b074634feebee44e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-351764
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-351764
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:34:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-351764
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:35:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:35:37 +0000   Wed, 31 Jul 2024 23:34:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:35:37 +0000   Wed, 31 Jul 2024 23:34:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:35:37 +0000   Wed, 31 Jul 2024 23:34:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:35:37 +0000   Wed, 31 Jul 2024 23:34:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    kubernetes-upgrade-351764
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 efc02a5857574bd4ad7c0b620b90377a
	  System UUID:                efc02a58-5757-4bd4-ad7c-0b620b90377a
	  Boot ID:                    5cad101e-47a1-4bcf-aa00-6c4332c21812
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace    Name                                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------    ----                                                ------------  ----------  ---------------  -------------  ---
	  kube-system  coredns-5cfdc65f69-2jkkc                            100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system  coredns-5cfdc65f69-m2fqx                            100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     62s
	  kube-system  etcd-kubernetes-upgrade-351764                      100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         64s
	  kube-system  kube-apiserver-kubernetes-upgrade-351764            250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system  kube-controller-manager-kubernetes-upgrade-351764   200m (10%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system  kube-proxy-68j6x                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system  kube-scheduler-kubernetes-upgrade-351764            100m (5%)     0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system  storage-provisioner                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 61s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node kubernetes-upgrade-351764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 74s)  kubelet          Node kubernetes-upgrade-351764 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node kubernetes-upgrade-351764 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           62s                node-controller  Node kubernetes-upgrade-351764 event: Registered Node kubernetes-upgrade-351764 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-351764 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-351764 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-351764 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-351764 event: Registered Node kubernetes-upgrade-351764 in Controller
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.430404] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.056076] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068558] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.171527] systemd-fstab-generator[593]: Ignoring "noauto" option for root device
	[  +0.163686] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.292779] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +4.313155] systemd-fstab-generator[733]: Ignoring "noauto" option for root device
	[  +0.070299] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.048785] systemd-fstab-generator[853]: Ignoring "noauto" option for root device
	[  +9.531279] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +0.094294] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.051872] kauditd_printk_skb: 97 callbacks suppressed
	[Jul31 23:35] systemd-fstab-generator[2261]: Ignoring "noauto" option for root device
	[  +0.092774] kauditd_printk_skb: 9 callbacks suppressed
	[  +0.071997] systemd-fstab-generator[2273]: Ignoring "noauto" option for root device
	[  +0.188332] systemd-fstab-generator[2287]: Ignoring "noauto" option for root device
	[  +0.140827] systemd-fstab-generator[2299]: Ignoring "noauto" option for root device
	[  +0.329360] systemd-fstab-generator[2327]: Ignoring "noauto" option for root device
	[  +1.355304] systemd-fstab-generator[2480]: Ignoring "noauto" option for root device
	[  +3.723905] kauditd_printk_skb: 229 callbacks suppressed
	[ +11.733999] systemd-fstab-generator[3553]: Ignoring "noauto" option for root device
	[  +5.289542] systemd-fstab-generator[3963]: Ignoring "noauto" option for root device
	[  +0.117608] kauditd_printk_skb: 45 callbacks suppressed
	
	
	==> etcd [9ef16a916946e213dd1789ba3894589ce5a420ba52a6e8d72ccb42ce49711ebd] <==
	{"level":"info","ts":"2024-07-31T23:35:32.011197Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:35:32.011254Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:35:32.011264Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:35:32.011591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19024f543fef3d0c switched to configuration voters=(1802090024170110220)"}
	{"level":"info","ts":"2024-07-31T23:35:32.011654Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"be0e2aae0afb30be","local-member-id":"19024f543fef3d0c","added-peer-id":"19024f543fef3d0c","added-peer-peer-urls":["https://192.168.39.228:2380"]}
	{"level":"info","ts":"2024-07-31T23:35:32.01175Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"be0e2aae0afb30be","local-member-id":"19024f543fef3d0c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:35:32.011786Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:35:32.016416Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.39.228:2380"}
	{"level":"info","ts":"2024-07-31T23:35:32.017324Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.39.228:2380"}
	{"level":"info","ts":"2024-07-31T23:35:33.385247Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19024f543fef3d0c is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T23:35:33.385288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19024f543fef3d0c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T23:35:33.385314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19024f543fef3d0c received MsgPreVoteResp from 19024f543fef3d0c at term 2"}
	{"level":"info","ts":"2024-07-31T23:35:33.385325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19024f543fef3d0c became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T23:35:33.385331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19024f543fef3d0c received MsgVoteResp from 19024f543fef3d0c at term 3"}
	{"level":"info","ts":"2024-07-31T23:35:33.385339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19024f543fef3d0c became leader at term 3"}
	{"level":"info","ts":"2024-07-31T23:35:33.385346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 19024f543fef3d0c elected leader 19024f543fef3d0c at term 3"}
	{"level":"info","ts":"2024-07-31T23:35:33.389178Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:35:33.390003Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T23:35:33.39076Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.228:2379"}
	{"level":"info","ts":"2024-07-31T23:35:33.391262Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:35:33.391936Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-31T23:35:33.389133Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"19024f543fef3d0c","local-member-attributes":"{Name:kubernetes-upgrade-351764 ClientURLs:[https://192.168.39.228:2379]}","request-path":"/0/members/19024f543fef3d0c/attributes","cluster-id":"be0e2aae0afb30be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T23:35:33.392844Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-31T23:35:33.399582Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T23:35:33.399616Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [c66512805a60138ea2e92513fb8535c395393a0dd8f2b52b9637ca52816030fe] <==
	{"level":"warn","ts":"2024-07-31T23:35:20.71399Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-07-31T23:35:20.714005Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.228:2380"]}
	{"level":"info","ts":"2024-07-31T23:35:20.714098Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T23:35:20.716608Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.228:2379"]}
	{"level":"info","ts":"2024-07-31T23:35:20.71689Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.14","git-sha":"bf51a53a7","go-version":"go1.21.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-351764","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.228:2380"],"listen-peer-urls":["https://192.168.39.228:2380"],"advertise-client-urls":["https://192.168.39.228:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.228:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new"
,"initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-07-31T23:35:20.765616Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"48.392217ms"}
	{"level":"info","ts":"2024-07-31T23:35:20.849204Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-31T23:35:20.907753Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"be0e2aae0afb30be","local-member-id":"19024f543fef3d0c","commit-index":433}
	{"level":"info","ts":"2024-07-31T23:35:20.907857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19024f543fef3d0c switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-31T23:35:20.907883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19024f543fef3d0c became follower at term 2"}
	{"level":"info","ts":"2024-07-31T23:35:20.9079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 19024f543fef3d0c [peers: [], term: 2, commit: 433, applied: 0, lastindex: 433, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-31T23:35:20.976959Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-31T23:35:21.114849Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":420}
	{"level":"info","ts":"2024-07-31T23:35:21.381692Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-31T23:35:21.617418Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"19024f543fef3d0c","timeout":"7s"}
	{"level":"info","ts":"2024-07-31T23:35:21.617913Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"19024f543fef3d0c"}
	{"level":"info","ts":"2024-07-31T23:35:21.618189Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"19024f543fef3d0c","local-server-version":"3.5.14","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-31T23:35:21.61865Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T23:35:21.619594Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:35:21.6203Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:35:21.620336Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:35:21.622423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"19024f543fef3d0c switched to configuration voters=(1802090024170110220)"}
	{"level":"info","ts":"2024-07-31T23:35:21.622563Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"be0e2aae0afb30be","local-member-id":"19024f543fef3d0c","added-peer-id":"19024f543fef3d0c","added-peer-peer-urls":["https://192.168.39.228:2380"]}
	{"level":"info","ts":"2024-07-31T23:35:21.628289Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"be0e2aae0afb30be","local-member-id":"19024f543fef3d0c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:35:21.628383Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> kernel <==
	 23:35:42 up 1 min,  0 users,  load average: 1.68, 0.51, 0.18
	Linux kubernetes-upgrade-351764 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [454be4035cfa07a6c877f3682cf1f807725976848c9bf4f1cb000d52d8595dbb] <==
	I0731 23:35:20.804238       1 options.go:228] external host was not specified, using 192.168.39.228
	I0731 23:35:20.818866       1 server.go:142] Version: v1.31.0-beta.0
	I0731 23:35:20.818944       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:35:21.854167       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W0731 23:35:21.861798       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:21.861968       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0731 23:35:21.863827       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 23:35:21.877714       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0731 23:35:21.877757       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0731 23:35:21.878039       1 instance.go:231] Using reconciler: lease
	W0731 23:35:21.881166       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:22.862791       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:22.862883       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:22.881947       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:24.317337       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:24.404138       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:24.526759       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:26.870125       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:26.961279       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:27.450753       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:31.168154       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:35:31.407203       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [f9356f29f6de0f9841db89eb524b11e1c751e1d7588c7c1c483153717ac486bc] <==
	I0731 23:35:37.527158       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 23:35:37.528970       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 23:35:37.529037       1 policy_source.go:224] refreshing policies
	I0731 23:35:37.582818       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 23:35:37.583748       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 23:35:37.584161       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 23:35:37.590860       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0731 23:35:37.590928       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0731 23:35:37.591805       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0731 23:35:37.593670       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 23:35:37.593792       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 23:35:37.593941       1 aggregator.go:171] initial CRD sync complete...
	I0731 23:35:37.593984       1 autoregister_controller.go:144] Starting autoregister controller
	I0731 23:35:37.594009       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 23:35:37.594031       1 cache.go:39] Caches are synced for autoregister controller
	E0731 23:35:37.602358       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0731 23:35:37.604486       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 23:35:38.385948       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 23:35:39.327065       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 23:35:39.348074       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 23:35:39.401503       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 23:35:39.498852       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 23:35:39.511424       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 23:35:40.645366       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 23:35:41.780857       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7eb0659ad41cd7f16d4ce88cfa758ce368acd9c97785c80ed8c80146319af260] <==
	I0731 23:35:22.000931       1 serving.go:386] Generated self-signed cert in-memory
	I0731 23:35:22.473699       1 controllermanager.go:188] "Starting" version="v1.31.0-beta.0"
	I0731 23:35:22.473791       1 controllermanager.go:190] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:35:22.475206       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 23:35:22.475361       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0731 23:35:22.475371       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0731 23:35:22.475639       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [cb7569fc76d20f561f52eb204201ebfd3a9fdfa2950f4d4e401aa9d0187948ae] <==
	I0731 23:35:41.762705       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0731 23:35:41.764834       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0731 23:35:41.765010       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-351764"
	I0731 23:35:41.766196       1 shared_informer.go:320] Caches are synced for deployment
	I0731 23:35:41.773933       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0731 23:35:41.809602       1 shared_informer.go:320] Caches are synced for namespace
	I0731 23:35:41.837147       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0731 23:35:41.839996       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0731 23:35:41.840261       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0731 23:35:41.840609       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0731 23:35:41.864244       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0731 23:35:41.869667       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 23:35:41.877666       1 shared_informer.go:320] Caches are synced for service account
	I0731 23:35:41.913132       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 23:35:41.995990       1 shared_informer.go:320] Caches are synced for expand
	I0731 23:35:42.013132       1 shared_informer.go:320] Caches are synced for stateful set
	I0731 23:35:42.014213       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 23:35:42.019252       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:35:42.019320       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 23:35:42.023113       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 23:35:42.023157       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 23:35:42.063234       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 23:35:42.063439       1 shared_informer.go:320] Caches are synced for PVC protection
	I0731 23:35:42.065795       1 shared_informer.go:320] Caches are synced for ephemeral
	I0731 23:35:42.075255       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [5c0b3a50e70cb8a58373a4ddfd513d973d7bdeb6e500037a2509dd0520f8e9b5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0731 23:35:38.095354       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0731 23:35:38.105138       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.39.228"]
	E0731 23:35:38.105330       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0731 23:35:38.142191       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0731 23:35:38.142232       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 23:35:38.142259       1 server_linux.go:170] "Using iptables Proxier"
	I0731 23:35:38.144987       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0731 23:35:38.145495       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0731 23:35:38.145636       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:35:38.147260       1 config.go:197] "Starting service config controller"
	I0731 23:35:38.147317       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 23:35:38.147352       1 config.go:104] "Starting endpoint slice config controller"
	I0731 23:35:38.147368       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 23:35:38.148184       1 config.go:326] "Starting node config controller"
	I0731 23:35:38.148272       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 23:35:38.248482       1 shared_informer.go:320] Caches are synced for node config
	I0731 23:35:38.248576       1 shared_informer.go:320] Caches are synced for service config
	I0731 23:35:38.248617       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d6cf8b4ab43bbd8daac1c7c4cf438e22349dcbc46856c4088860ad52f76186e1] <==
	
	
	==> kube-scheduler [973341536b2407e29ab006f5a4b851303126d75ef4deed7c3df7096e4ca02c74] <==
	I0731 23:35:35.647454       1 serving.go:386] Generated self-signed cert in-memory
	W0731 23:35:37.492271       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0731 23:35:37.492320       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 23:35:37.492330       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 23:35:37.492340       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 23:35:37.526994       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0731 23:35:37.527301       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:35:37.532075       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0731 23:35:37.532976       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0731 23:35:37.544055       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0731 23:35:37.537167       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0731 23:35:37.646766       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a06445c26106e9367325ec2fa2c5e7f7927c91f6600c88eef52852d34ae7e15a] <==
	I0731 23:35:21.834952       1 serving.go:386] Generated self-signed cert in-memory
	W0731 23:35:32.816992       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.39.228:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.228:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.228:54182->192.168.39.228:8443: read: connection reset by peer
	W0731 23:35:32.817020       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0731 23:35:32.817027       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0731 23:35:32.826951       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0731 23:35:32.827142       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0731 23:35:32.827185       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0731 23:35:32.829325       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0731 23:35:32.829558       1 server.go:237] "waiting for handlers to sync" err="context canceled"
	E0731 23:35:32.829633       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 31 23:35:34 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:34.779671    3560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0560ecee450709f23327212e4ee0602-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-351764\" (UID: \"d0560ecee450709f23327212e4ee0602\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-351764"
	Jul 31 23:35:34 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:34.779689    3560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0560ecee450709f23327212e4ee0602-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-351764\" (UID: \"d0560ecee450709f23327212e4ee0602\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-351764"
	Jul 31 23:35:34 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:34.779707    3560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/6663245f93dcc97fb3ab83ad8e9c29bb-etcd-data\") pod \"etcd-kubernetes-upgrade-351764\" (UID: \"6663245f93dcc97fb3ab83ad8e9c29bb\") " pod="kube-system/etcd-kubernetes-upgrade-351764"
	Jul 31 23:35:34 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:34.891194    3560 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-351764"
	Jul 31 23:35:34 kubernetes-upgrade-351764 kubelet[3560]: E0731 23:35:34.892135    3560 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.228:8443: connect: connection refused" node="kubernetes-upgrade-351764"
	Jul 31 23:35:35 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:35.010641    3560 scope.go:117] "RemoveContainer" containerID="a06445c26106e9367325ec2fa2c5e7f7927c91f6600c88eef52852d34ae7e15a"
	Jul 31 23:35:35 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:35.016562    3560 scope.go:117] "RemoveContainer" containerID="454be4035cfa07a6c877f3682cf1f807725976848c9bf4f1cb000d52d8595dbb"
	Jul 31 23:35:35 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:35.017777    3560 scope.go:117] "RemoveContainer" containerID="7eb0659ad41cd7f16d4ce88cfa758ce368acd9c97785c80ed8c80146319af260"
	Jul 31 23:35:35 kubernetes-upgrade-351764 kubelet[3560]: E0731 23:35:35.174445    3560 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-351764?timeout=10s\": dial tcp 192.168.39.228:8443: connect: connection refused" interval="800ms"
	Jul 31 23:35:35 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:35.293703    3560 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-351764"
	Jul 31 23:35:35 kubernetes-upgrade-351764 kubelet[3560]: E0731 23:35:35.294398    3560 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.228:8443: connect: connection refused" node="kubernetes-upgrade-351764"
	Jul 31 23:35:36 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:36.097042    3560 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-351764"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:37.552330    3560 apiserver.go:52] "Watching apiserver"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:37.571951    3560 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:37.634896    3560 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-351764"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:37.635151    3560 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-351764"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:37.635247    3560 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:37.636601    3560 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:37.666894    3560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/242e4da8-a69d-4691-ad2a-6739e5e302da-lib-modules\") pod \"kube-proxy-68j6x\" (UID: \"242e4da8-a69d-4691-ad2a-6739e5e302da\") " pod="kube-system/kube-proxy-68j6x"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:37.666969    3560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/84de5627-df72-443c-8467-80eb943d7d80-tmp\") pod \"storage-provisioner\" (UID: \"84de5627-df72-443c-8467-80eb943d7d80\") " pod="kube-system/storage-provisioner"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:37.667253    3560 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/242e4da8-a69d-4691-ad2a-6739e5e302da-xtables-lock\") pod \"kube-proxy-68j6x\" (UID: \"242e4da8-a69d-4691-ad2a-6739e5e302da\") " pod="kube-system/kube-proxy-68j6x"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: E0731 23:35:37.817765    3560 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-351764\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-351764"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: E0731 23:35:37.820369    3560 kubelet.go:1900] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-351764\" already exists" pod="kube-system/etcd-kubernetes-upgrade-351764"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:37.864962    3560 scope.go:117] "RemoveContainer" containerID="823d0810af4832afd5ce327457cd9d1f2cce267e9ee568fce887b67d03ad6a49"
	Jul 31 23:35:37 kubernetes-upgrade-351764 kubelet[3560]: I0731 23:35:37.866497    3560 scope.go:117] "RemoveContainer" containerID="d6cf8b4ab43bbd8daac1c7c4cf438e22349dcbc46856c4088860ad52f76186e1"
	
	
	==> storage-provisioner [6d27584471755deb1a0d524450083af4f30154b89e055ad605a2daa302801bfd] <==
	I0731 23:35:38.002483       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0731 23:35:38.014058       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0731 23:35:38.014154       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [823d0810af4832afd5ce327457cd9d1f2cce267e9ee568fce887b67d03ad6a49] <==
	I0731 23:35:21.973965       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0731 23:35:32.820878       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-351764 -n kubernetes-upgrade-351764
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-351764 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-351764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-351764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-351764: (1.191758605s)
--- FAIL: TestKubernetesUpgrade (389.09s)

TestPause/serial/SecondStartNoReconfiguration (52.75s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-343154 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-343154 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.00894468s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-343154] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-343154" primary control-plane node in "pause-343154" cluster
	* Updating the running kvm2 "pause-343154" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-343154" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0731 23:31:03.379387 1219947 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:31:03.379534 1219947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:31:03.379549 1219947 out.go:304] Setting ErrFile to fd 2...
	I0731 23:31:03.379556 1219947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:31:03.379746 1219947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 23:31:03.380386 1219947 out.go:298] Setting JSON to false
	I0731 23:31:03.381562 1219947 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":26014,"bootTime":1722442649,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 23:31:03.381637 1219947 start.go:139] virtualization: kvm guest
	I0731 23:31:03.383931 1219947 out.go:177] * [pause-343154] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 23:31:03.385632 1219947 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 23:31:03.385654 1219947 notify.go:220] Checking for updates...
	I0731 23:31:03.388395 1219947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:31:03.389970 1219947 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:31:03.391320 1219947 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:31:03.392720 1219947 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 23:31:03.394073 1219947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:31:03.396246 1219947 config.go:182] Loaded profile config "pause-343154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:31:03.396906 1219947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:31:03.396989 1219947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:31:03.418744 1219947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40091
	I0731 23:31:03.419349 1219947 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:31:03.420053 1219947 main.go:141] libmachine: Using API Version  1
	I0731 23:31:03.420108 1219947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:31:03.420588 1219947 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:31:03.420912 1219947 main.go:141] libmachine: (pause-343154) Calling .DriverName
	I0731 23:31:03.421252 1219947 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:31:03.421707 1219947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:31:03.421756 1219947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:31:03.442275 1219947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33015
	I0731 23:31:03.442895 1219947 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:31:03.443825 1219947 main.go:141] libmachine: Using API Version  1
	I0731 23:31:03.443856 1219947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:31:03.444637 1219947 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:31:03.444879 1219947 main.go:141] libmachine: (pause-343154) Calling .DriverName
	I0731 23:31:03.495090 1219947 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 23:31:03.496572 1219947 start.go:297] selected driver: kvm2
	I0731 23:31:03.496597 1219947 start.go:901] validating driver "kvm2" against &{Name:pause-343154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-343154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:31:03.496763 1219947 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:31:03.497242 1219947 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:31:03.497351 1219947 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 23:31:03.516586 1219947 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 23:31:03.517727 1219947 cni.go:84] Creating CNI manager for ""
	I0731 23:31:03.517751 1219947 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 23:31:03.517847 1219947 start.go:340] cluster config:
	{Name:pause-343154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-343154 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:31:03.518011 1219947 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:31:03.519999 1219947 out.go:177] * Starting "pause-343154" primary control-plane node in "pause-343154" cluster
	I0731 23:31:03.521337 1219947 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 23:31:03.521404 1219947 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0731 23:31:03.521422 1219947 cache.go:56] Caching tarball of preloaded images
	I0731 23:31:03.521547 1219947 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 23:31:03.521561 1219947 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0731 23:31:03.521748 1219947 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/pause-343154/config.json ...
	I0731 23:31:03.522017 1219947 start.go:360] acquireMachinesLock for pause-343154: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:31:14.416739 1219947 start.go:364] duration metric: took 10.894680997s to acquireMachinesLock for "pause-343154"
	I0731 23:31:14.416817 1219947 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:31:14.416827 1219947 fix.go:54] fixHost starting: 
	I0731 23:31:14.417259 1219947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:31:14.417327 1219947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:31:14.435887 1219947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0731 23:31:14.436468 1219947 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:31:14.437054 1219947 main.go:141] libmachine: Using API Version  1
	I0731 23:31:14.437082 1219947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:31:14.437452 1219947 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:31:14.437677 1219947 main.go:141] libmachine: (pause-343154) Calling .DriverName
	I0731 23:31:14.437868 1219947 main.go:141] libmachine: (pause-343154) Calling .GetState
	I0731 23:31:14.439692 1219947 fix.go:112] recreateIfNeeded on pause-343154: state=Running err=<nil>
	W0731 23:31:14.439719 1219947 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:31:14.441570 1219947 out.go:177] * Updating the running kvm2 "pause-343154" VM ...
	I0731 23:31:14.442906 1219947 machine.go:94] provisionDockerMachine start ...
	I0731 23:31:14.442949 1219947 main.go:141] libmachine: (pause-343154) Calling .DriverName
	I0731 23:31:14.443265 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHHostname
	I0731 23:31:14.446304 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:14.446719 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:14.446754 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:14.446891 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHPort
	I0731 23:31:14.447096 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:14.447263 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:14.447422 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHUsername
	I0731 23:31:14.447632 1219947 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:14.447903 1219947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I0731 23:31:14.447921 1219947 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:31:14.565624 1219947 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-343154
	
	I0731 23:31:14.565668 1219947 main.go:141] libmachine: (pause-343154) Calling .GetMachineName
	I0731 23:31:14.565967 1219947 buildroot.go:166] provisioning hostname "pause-343154"
	I0731 23:31:14.565997 1219947 main.go:141] libmachine: (pause-343154) Calling .GetMachineName
	I0731 23:31:14.566214 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHHostname
	I0731 23:31:14.569158 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:14.569469 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:14.569504 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:14.569697 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHPort
	I0731 23:31:14.569977 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:14.570203 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:14.570391 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHUsername
	I0731 23:31:14.570582 1219947 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:14.570805 1219947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I0731 23:31:14.570826 1219947 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-343154 && echo "pause-343154" | sudo tee /etc/hostname
	I0731 23:31:14.705056 1219947 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-343154
	
	I0731 23:31:14.705093 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHHostname
	I0731 23:31:14.708362 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:14.708716 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:14.708757 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:14.708970 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHPort
	I0731 23:31:14.709212 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:14.709407 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:14.709594 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHUsername
	I0731 23:31:14.709803 1219947 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:14.710002 1219947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I0731 23:31:14.710025 1219947 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-343154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-343154/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-343154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:31:14.833904 1219947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:31:14.833976 1219947 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 23:31:14.834019 1219947 buildroot.go:174] setting up certificates
	I0731 23:31:14.834035 1219947 provision.go:84] configureAuth start
	I0731 23:31:14.834053 1219947 main.go:141] libmachine: (pause-343154) Calling .GetMachineName
	I0731 23:31:14.834385 1219947 main.go:141] libmachine: (pause-343154) Calling .GetIP
	I0731 23:31:14.837319 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:14.837796 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:14.837828 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:14.838051 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHHostname
	I0731 23:31:14.840710 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:14.841117 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:14.841151 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:14.841359 1219947 provision.go:143] copyHostCerts
	I0731 23:31:14.841439 1219947 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 23:31:14.841454 1219947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 23:31:14.841521 1219947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 23:31:14.841655 1219947 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 23:31:14.841671 1219947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 23:31:14.841702 1219947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 23:31:14.841786 1219947 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 23:31:14.841796 1219947 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 23:31:14.841820 1219947 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 23:31:14.841888 1219947 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.pause-343154 san=[127.0.0.1 192.168.61.235 localhost minikube pause-343154]
	I0731 23:31:15.201687 1219947 provision.go:177] copyRemoteCerts
	I0731 23:31:15.201753 1219947 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:31:15.201782 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHHostname
	I0731 23:31:15.204873 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:15.205423 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:15.205450 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:15.205705 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHPort
	I0731 23:31:15.205958 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:15.206219 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHUsername
	I0731 23:31:15.206381 1219947 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/pause-343154/id_rsa Username:docker}
	I0731 23:31:15.297352 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0731 23:31:15.332386 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 23:31:15.362318 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0731 23:31:15.388188 1219947 provision.go:87] duration metric: took 554.131169ms to configureAuth
	I0731 23:31:15.388232 1219947 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:31:15.388541 1219947 config.go:182] Loaded profile config "pause-343154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:31:15.388653 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHHostname
	I0731 23:31:15.391826 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:15.392312 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:15.392348 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:15.392627 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHPort
	I0731 23:31:15.392882 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:15.393159 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:15.393322 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHUsername
	I0731 23:31:15.393546 1219947 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:15.393730 1219947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I0731 23:31:15.393749 1219947 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 23:31:21.071061 1219947 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 23:31:21.071093 1219947 machine.go:97] duration metric: took 6.628162148s to provisionDockerMachine
	I0731 23:31:21.071105 1219947 start.go:293] postStartSetup for "pause-343154" (driver="kvm2")
	I0731 23:31:21.071116 1219947 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:31:21.071134 1219947 main.go:141] libmachine: (pause-343154) Calling .DriverName
	I0731 23:31:21.071531 1219947 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:31:21.071575 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHHostname
	I0731 23:31:21.074458 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:21.074925 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:21.074950 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:21.075232 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHPort
	I0731 23:31:21.075478 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:21.075654 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHUsername
	I0731 23:31:21.075825 1219947 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/pause-343154/id_rsa Username:docker}
	I0731 23:31:21.163450 1219947 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:31:21.168489 1219947 info.go:137] Remote host: Buildroot 2023.02.9
	I0731 23:31:21.168528 1219947 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 23:31:21.168612 1219947 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 23:31:21.168691 1219947 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 23:31:21.168784 1219947 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:31:21.179319 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:31:21.215175 1219947 start.go:296] duration metric: took 144.050041ms for postStartSetup
	I0731 23:31:21.215239 1219947 fix.go:56] duration metric: took 6.798412268s for fixHost
	I0731 23:31:21.215270 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHHostname
	I0731 23:31:21.218720 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:21.219172 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:21.219201 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:21.219494 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHPort
	I0731 23:31:21.219726 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:21.219882 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:21.220055 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHUsername
	I0731 23:31:21.220277 1219947 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:21.220500 1219947 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.235 22 <nil> <nil>}
	I0731 23:31:21.220514 1219947 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0731 23:31:21.333965 1219947 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722468681.324578651
	
	I0731 23:31:21.333996 1219947 fix.go:216] guest clock: 1722468681.324578651
	I0731 23:31:21.334008 1219947 fix.go:229] Guest: 2024-07-31 23:31:21.324578651 +0000 UTC Remote: 2024-07-31 23:31:21.215245658 +0000 UTC m=+17.885252769 (delta=109.332993ms)
	I0731 23:31:21.334050 1219947 fix.go:200] guest clock delta is within tolerance: 109.332993ms
	I0731 23:31:21.334058 1219947 start.go:83] releasing machines lock for "pause-343154", held for 6.917267739s
	I0731 23:31:21.334086 1219947 main.go:141] libmachine: (pause-343154) Calling .DriverName
	I0731 23:31:21.334430 1219947 main.go:141] libmachine: (pause-343154) Calling .GetIP
	I0731 23:31:21.337610 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:21.338210 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:21.338238 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:21.338419 1219947 main.go:141] libmachine: (pause-343154) Calling .DriverName
	I0731 23:31:21.339126 1219947 main.go:141] libmachine: (pause-343154) Calling .DriverName
	I0731 23:31:21.339380 1219947 main.go:141] libmachine: (pause-343154) Calling .DriverName
	I0731 23:31:21.339512 1219947 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 23:31:21.339561 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHHostname
	I0731 23:31:21.339716 1219947 ssh_runner.go:195] Run: cat /version.json
	I0731 23:31:21.339734 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHHostname
	I0731 23:31:21.342951 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:21.343294 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:21.343343 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:21.343365 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:21.343582 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHPort
	I0731 23:31:21.343832 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:21.343876 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:21.343903 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:21.344052 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHUsername
	I0731 23:31:21.344175 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHPort
	I0731 23:31:21.344256 1219947 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/pause-343154/id_rsa Username:docker}
	I0731 23:31:21.344339 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHKeyPath
	I0731 23:31:21.344494 1219947 main.go:141] libmachine: (pause-343154) Calling .GetSSHUsername
	I0731 23:31:21.344613 1219947 sshutil.go:53] new ssh client: &{IP:192.168.61.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/pause-343154/id_rsa Username:docker}
	I0731 23:31:21.435246 1219947 ssh_runner.go:195] Run: systemctl --version
	I0731 23:31:21.454271 1219947 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 23:31:21.625224 1219947 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 23:31:21.633451 1219947 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:31:21.633556 1219947 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:31:21.647306 1219947 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0731 23:31:21.647347 1219947 start.go:495] detecting cgroup driver to use...
	I0731 23:31:21.647419 1219947 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:31:21.668618 1219947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:31:21.683923 1219947 docker.go:217] disabling cri-docker service (if available) ...
	I0731 23:31:21.683993 1219947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 23:31:21.698690 1219947 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 23:31:21.714420 1219947 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 23:31:21.877213 1219947 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 23:31:22.025880 1219947 docker.go:233] disabling docker service ...
	I0731 23:31:22.025973 1219947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 23:31:22.045508 1219947 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 23:31:22.061202 1219947 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 23:31:22.218024 1219947 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 23:31:22.372257 1219947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 23:31:22.389799 1219947 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:31:22.412936 1219947 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0731 23:31:22.413004 1219947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:22.424498 1219947 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 23:31:22.424570 1219947 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:22.436212 1219947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:22.447284 1219947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:22.458823 1219947 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:31:22.472578 1219947 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:22.487651 1219947 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:22.503760 1219947 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:22.515852 1219947 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:31:22.528328 1219947 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:31:22.540482 1219947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:31:22.732173 1219947 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 23:31:23.218556 1219947 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 23:31:23.218648 1219947 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 23:31:23.226377 1219947 start.go:563] Will wait 60s for crictl version
	I0731 23:31:23.226456 1219947 ssh_runner.go:195] Run: which crictl
	I0731 23:31:23.231644 1219947 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:31:23.273193 1219947 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0731 23:31:23.273279 1219947 ssh_runner.go:195] Run: crio --version
	I0731 23:31:23.309232 1219947 ssh_runner.go:195] Run: crio --version
	I0731 23:31:23.346848 1219947 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0731 23:31:23.348305 1219947 main.go:141] libmachine: (pause-343154) Calling .GetIP
	I0731 23:31:23.352322 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:23.352759 1219947 main.go:141] libmachine: (pause-343154) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:ba:d2", ip: ""} in network mk-pause-343154: {Iface:virbr3 ExpiryTime:2024-08-01 00:30:18 +0000 UTC Type:0 Mac:52:54:00:d7:ba:d2 Iaid: IPaddr:192.168.61.235 Prefix:24 Hostname:pause-343154 Clientid:01:52:54:00:d7:ba:d2}
	I0731 23:31:23.352790 1219947 main.go:141] libmachine: (pause-343154) DBG | domain pause-343154 has defined IP address 192.168.61.235 and MAC address 52:54:00:d7:ba:d2 in network mk-pause-343154
	I0731 23:31:23.353052 1219947 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0731 23:31:23.357880 1219947 kubeadm.go:883] updating cluster {Name:pause-343154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-343154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0731 23:31:23.358040 1219947 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0731 23:31:23.358104 1219947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:31:23.410260 1219947 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 23:31:23.410293 1219947 crio.go:433] Images already preloaded, skipping extraction
	I0731 23:31:23.410357 1219947 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:31:23.449388 1219947 crio.go:514] all images are preloaded for cri-o runtime.
	I0731 23:31:23.449414 1219947 cache_images.go:84] Images are preloaded, skipping loading
	I0731 23:31:23.449423 1219947 kubeadm.go:934] updating node { 192.168.61.235 8443 v1.30.3 crio true true} ...
	I0731 23:31:23.449565 1219947 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-343154 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.235
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-343154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0731 23:31:23.449657 1219947 ssh_runner.go:195] Run: crio config
	I0731 23:31:23.512440 1219947 cni.go:84] Creating CNI manager for ""
	I0731 23:31:23.512472 1219947 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 23:31:23.512487 1219947 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0731 23:31:23.512519 1219947 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.235 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-343154 NodeName:pause-343154 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.235"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.235 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0731 23:31:23.512722 1219947 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.235
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-343154"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.235
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.235"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0731 23:31:23.512827 1219947 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0731 23:31:23.524949 1219947 binaries.go:44] Found k8s binaries, skipping transfer
	I0731 23:31:23.525036 1219947 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0731 23:31:23.539166 1219947 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0731 23:31:23.560658 1219947 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0731 23:31:23.583813 1219947 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0731 23:31:23.602146 1219947 ssh_runner.go:195] Run: grep 192.168.61.235	control-plane.minikube.internal$ /etc/hosts
	I0731 23:31:23.606579 1219947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:31:23.750477 1219947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:31:23.773022 1219947 certs.go:68] Setting up /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/pause-343154 for IP: 192.168.61.235
	I0731 23:31:23.773047 1219947 certs.go:194] generating shared ca certs ...
	I0731 23:31:23.773070 1219947 certs.go:226] acquiring lock for ca certs: {Name:mk2f2b6238a52b631df307178597d6663cc4b46e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:31:23.773268 1219947 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key
	I0731 23:31:23.773324 1219947 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key
	I0731 23:31:23.773338 1219947 certs.go:256] generating profile certs ...
	I0731 23:31:23.773450 1219947 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/pause-343154/client.key
	I0731 23:31:23.773536 1219947 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/pause-343154/apiserver.key.dd3e6ed3
	I0731 23:31:23.773587 1219947 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/pause-343154/proxy-client.key
	I0731 23:31:23.773741 1219947 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem (1338 bytes)
	W0731 23:31:23.773792 1219947 certs.go:480] ignoring /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400_empty.pem, impossibly tiny 0 bytes
	I0731 23:31:23.773806 1219947 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem (1675 bytes)
	I0731 23:31:23.773842 1219947 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem (1078 bytes)
	I0731 23:31:23.773879 1219947 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem (1123 bytes)
	I0731 23:31:23.773913 1219947 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem (1675 bytes)
	I0731 23:31:23.773970 1219947 certs.go:484] found cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:31:23.774672 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0731 23:31:23.809350 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0731 23:31:23.869505 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0731 23:31:23.920173 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0731 23:31:23.961268 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/pause-343154/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0731 23:31:23.999221 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/pause-343154/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0731 23:31:24.045581 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/pause-343154/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0731 23:31:24.083178 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/pause-343154/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0731 23:31:24.127806 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/1179400.pem --> /usr/share/ca-certificates/1179400.pem (1338 bytes)
	I0731 23:31:24.168932 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /usr/share/ca-certificates/11794002.pem (1708 bytes)
	I0731 23:31:24.216466 1219947 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0731 23:31:24.251023 1219947 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0731 23:31:24.271154 1219947 ssh_runner.go:195] Run: openssl version
	I0731 23:31:24.278888 1219947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1179400.pem && ln -fs /usr/share/ca-certificates/1179400.pem /etc/ssl/certs/1179400.pem"
	I0731 23:31:24.292201 1219947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1179400.pem
	I0731 23:31:24.297532 1219947 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 31 22:37 /usr/share/ca-certificates/1179400.pem
	I0731 23:31:24.297608 1219947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1179400.pem
	I0731 23:31:24.304414 1219947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1179400.pem /etc/ssl/certs/51391683.0"
	I0731 23:31:24.323967 1219947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11794002.pem && ln -fs /usr/share/ca-certificates/11794002.pem /etc/ssl/certs/11794002.pem"
	I0731 23:31:24.338053 1219947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11794002.pem
	I0731 23:31:24.343203 1219947 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 31 22:37 /usr/share/ca-certificates/11794002.pem
	I0731 23:31:24.343274 1219947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11794002.pem
	I0731 23:31:24.351279 1219947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11794002.pem /etc/ssl/certs/3ec20f2e.0"
	I0731 23:31:24.362946 1219947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0731 23:31:24.381215 1219947 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:31:24.387151 1219947 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 31 21:56 /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:31:24.387240 1219947 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0731 23:31:24.394344 1219947 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0731 23:31:24.413735 1219947 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0731 23:31:24.426288 1219947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0731 23:31:24.435612 1219947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0731 23:31:24.446201 1219947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0731 23:31:24.463212 1219947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0731 23:31:24.475815 1219947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0731 23:31:24.487480 1219947 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0731 23:31:24.494349 1219947 kubeadm.go:392] StartCluster: {Name:pause-343154 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-343154 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 23:31:24.494532 1219947 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0731 23:31:24.494604 1219947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0731 23:31:24.557527 1219947 cri.go:89] found id: "89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254"
	I0731 23:31:24.557554 1219947 cri.go:89] found id: "4c9d4617012ae22820ae12d0b4b652cba31daacd8174ca8f4df2a02ba020f18e"
	I0731 23:31:24.557559 1219947 cri.go:89] found id: "68276636487f98a685d7cd128f75baa041ba8690f9dde6d36fa227bd7a420830"
	I0731 23:31:24.557563 1219947 cri.go:89] found id: "1c50233c020c9d08dcb523b543dfe9dfc885fe0bd32cd9dbfdea347c8dc7f199"
	I0731 23:31:24.557567 1219947 cri.go:89] found id: "b8e4f347a8c3c7f0fbca5f13a74b36c5a6e2977ed7505a4590226bec1524fd52"
	I0731 23:31:24.557571 1219947 cri.go:89] found id: "240c60cf8d397f37e67c3e43694db8126659e80dddf59faf36bec8b549860cfc"
	I0731 23:31:24.557574 1219947 cri.go:89] found id: "a774d18a7254f6d800d5381f4e49913ea1b5b7ad036aead2b86dd286929c1f54"
	I0731 23:31:24.557578 1219947 cri.go:89] found id: ""
	I0731 23:31:24.557635 1219947 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-343154 -n pause-343154
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-343154 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-343154 logs -n 25: (1.541164657s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p test-preload-931367         | test-preload-931367       | jenkins | v1.33.1 | 31 Jul 24 23:26 UTC | 31 Jul 24 23:27 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| image   | test-preload-931367 image list | test-preload-931367       | jenkins | v1.33.1 | 31 Jul 24 23:27 UTC | 31 Jul 24 23:27 UTC |
	| delete  | -p test-preload-931367         | test-preload-931367       | jenkins | v1.33.1 | 31 Jul 24 23:27 UTC | 31 Jul 24 23:27 UTC |
	| start   | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:27 UTC | 31 Jul 24 23:28 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC | 31 Jul 24 23:28 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC | 31 Jul 24 23:28 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:29 UTC | 31 Jul 24 23:29 UTC |
	| start   | -p pause-343154 --memory=2048  | pause-343154              | jenkins | v1.33.1 | 31 Jul 24 23:29 UTC | 31 Jul 24 23:31 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-329824         | offline-crio-329824       | jenkins | v1.33.1 | 31 Jul 24 23:29 UTC | 31 Jul 24 23:30 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-351764   | kubernetes-upgrade-351764 | jenkins | v1.33.1 | 31 Jul 24 23:29 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-692279      | minikube                  | jenkins | v1.26.0 | 31 Jul 24 23:29 UTC | 31 Jul 24 23:31 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p offline-crio-329824         | offline-crio-329824       | jenkins | v1.33.1 | 31 Jul 24 23:30 UTC | 31 Jul 24 23:30 UTC |
	| start   | -p running-upgrade-524949      | minikube                  | jenkins | v1.26.0 | 31 Jul 24 23:30 UTC | 31 Jul 24 23:31 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-343154                | pause-343154              | jenkins | v1.33.1 | 31 Jul 24 23:31 UTC | 31 Jul 24 23:31 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-692279 stop    | minikube                  | jenkins | v1.26.0 | 31 Jul 24 23:31 UTC | 31 Jul 24 23:31 UTC |
	| start   | -p stopped-upgrade-692279      | stopped-upgrade-692279    | jenkins | v1.33.1 | 31 Jul 24 23:31 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-524949      | running-upgrade-524949    | jenkins | v1.33.1 | 31 Jul 24 23:31 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
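
For reference, the scheduled-stop-702146 rows above repeatedly arm a delayed shutdown and then either cancel it or let it fire. A hand-run sketch of the same flow, reusing the binary and profile name shown in the table (illustrative only):

    out/minikube-linux-amd64 stop -p scheduled-stop-702146 --schedule 15s      # arm a stop 15 seconds out
    out/minikube-linux-amd64 stop -p scheduled-stop-702146 --cancel-scheduled  # cancel it before it fires
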
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 23:31:43
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
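
Given the glog-style format described above (level letter, then mmdd and a timestamp), warnings and errors can be pulled out of a saved copy of this log with a simple filter; the file name here is hypothetical:

    grep -E '^[WEF][0-9]{4} ' last-start.log
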
	I0731 23:31:43.721369 1220421 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:31:43.721523 1220421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:31:43.721538 1220421 out.go:304] Setting ErrFile to fd 2...
	I0731 23:31:43.721545 1220421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:31:43.721749 1220421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 23:31:43.722426 1220421 out.go:298] Setting JSON to false
	I0731 23:31:43.723565 1220421 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":26055,"bootTime":1722442649,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 23:31:43.723645 1220421 start.go:139] virtualization: kvm guest
	I0731 23:31:43.726089 1220421 out.go:177] * [running-upgrade-524949] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 23:31:43.727503 1220421 notify.go:220] Checking for updates...
	I0731 23:31:43.727516 1220421 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 23:31:43.728887 1220421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:31:43.730101 1220421 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:31:43.731299 1220421 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:31:43.732419 1220421 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 23:31:43.733624 1220421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:31:43.735188 1220421 config.go:182] Loaded profile config "running-upgrade-524949": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0731 23:31:43.735823 1220421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:31:43.735900 1220421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:31:43.755408 1220421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
	I0731 23:31:43.755949 1220421 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:31:43.756763 1220421 main.go:141] libmachine: Using API Version  1
	I0731 23:31:43.756801 1220421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:31:43.757284 1220421 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:31:43.757547 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:43.759713 1220421 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 23:31:43.761062 1220421 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:31:43.761578 1220421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:31:43.761666 1220421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:31:43.782819 1220421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I0731 23:31:43.788588 1220421 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:31:43.789441 1220421 main.go:141] libmachine: Using API Version  1
	I0731 23:31:43.789473 1220421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:31:43.789910 1220421 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:31:43.790142 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:43.834044 1220421 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 23:31:43.835255 1220421 start.go:297] selected driver: kvm2
	I0731 23:31:43.835285 1220421 start.go:901] validating driver "kvm2" against &{Name:running-upgrade-524949 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-524
949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.53 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 23:31:43.835447 1220421 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:31:43.836510 1220421 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:31:43.836636 1220421 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 23:31:43.860685 1220421 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 23:31:43.861219 1220421 cni.go:84] Creating CNI manager for ""
	I0731 23:31:43.861237 1220421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 23:31:43.861298 1220421 start.go:340] cluster config:
	{Name:running-upgrade-524949 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-524949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.53 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 23:31:43.861447 1220421 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:31:43.863117 1220421 out.go:177] * Starting "running-upgrade-524949" primary control-plane node in "running-upgrade-524949" cluster
	I0731 23:31:43.864235 1220421 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0731 23:31:43.864312 1220421 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0731 23:31:43.864327 1220421 cache.go:56] Caching tarball of preloaded images
	I0731 23:31:43.864510 1220421 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 23:31:43.864527 1220421 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0731 23:31:43.864657 1220421 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/running-upgrade-524949/config.json ...
	I0731 23:31:43.864985 1220421 start.go:360] acquireMachinesLock for running-upgrade-524949: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:31:43.865073 1220421 start.go:364] duration metric: took 54.015µs to acquireMachinesLock for "running-upgrade-524949"
	I0731 23:31:43.865096 1220421 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:31:43.865105 1220421 fix.go:54] fixHost starting: 
	I0731 23:31:43.865517 1220421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:31:43.865574 1220421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:31:43.884536 1220421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37597
	I0731 23:31:43.885167 1220421 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:31:43.885716 1220421 main.go:141] libmachine: Using API Version  1
	I0731 23:31:43.885741 1220421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:31:43.886153 1220421 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:31:43.886367 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:43.886587 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetState
	I0731 23:31:43.888918 1220421 fix.go:112] recreateIfNeeded on running-upgrade-524949: state=Running err=<nil>
	W0731 23:31:43.888980 1220421 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:31:43.890627 1220421 out.go:177] * Updating the running kvm2 "running-upgrade-524949" VM ...
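
The kvm2 driver reports the existing running-upgrade-524949 machine as Running before updating it in place. Since the log shows the domain is managed via qemu:///system, the same state can be confirmed on the Jenkins host with the libvirt CLI (a sketch, assuming virsh is installed there):

    virsh --connect qemu:///system domstate running-upgrade-524949
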
	I0731 23:31:44.640379 1219947 pod_ready.go:102] pod "etcd-pause-343154" in "kube-system" namespace has status "Ready":"False"
	I0731 23:31:46.637457 1219947 pod_ready.go:92] pod "etcd-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:46.637501 1219947 pod_ready.go:81] duration metric: took 10.007431516s for pod "etcd-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:46.637518 1219947 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.145449 1219947 pod_ready.go:92] pod "kube-apiserver-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:47.145492 1219947 pod_ready.go:81] duration metric: took 507.964768ms for pod "kube-apiserver-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.145510 1219947 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.152305 1219947 pod_ready.go:92] pod "kube-controller-manager-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:47.152335 1219947 pod_ready.go:81] duration metric: took 6.816979ms for pod "kube-controller-manager-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.152346 1219947 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-262z4" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.158008 1219947 pod_ready.go:92] pod "kube-proxy-262z4" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:47.158040 1219947 pod_ready.go:81] duration metric: took 5.687461ms for pod "kube-proxy-262z4" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.158049 1219947 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.163935 1219947 pod_ready.go:92] pod "kube-scheduler-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:47.163969 1219947 pod_ready.go:81] duration metric: took 5.911661ms for pod "kube-scheduler-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.163981 1219947 pod_ready.go:38] duration metric: took 12.547926117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
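
The readiness polling above maps onto checks kubectl can run directly. An equivalent manual wait against the pause-343154 cluster (a sketch, assuming the kubectl context minikube created for that profile):

    kubectl --context pause-343154 -n kube-system wait pod \
      --selector k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl --context pause-343154 -n kube-system get pods \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'
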
	I0731 23:31:47.164002 1219947 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 23:31:47.178430 1219947 ops.go:34] apiserver oom_adj: -16
	I0731 23:31:47.178517 1219947 kubeadm.go:597] duration metric: took 22.534867043s to restartPrimaryControlPlane
	I0731 23:31:47.178534 1219947 kubeadm.go:394] duration metric: took 22.684195818s to StartCluster
	I0731 23:31:47.178560 1219947 settings.go:142] acquiring lock: {Name:mk076897bfd1af81579aafbccfd5a932e011b343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:31:47.178662 1219947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:31:47.179530 1219947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/kubeconfig: {Name:mk2865fa7a14d2aa7ec2bbf6e970de47767d4a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:31:47.179842 1219947 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 23:31:47.179968 1219947 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 23:31:47.180181 1219947 config.go:182] Loaded profile config "pause-343154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:31:47.182250 1219947 out.go:177] * Verifying Kubernetes components...
	I0731 23:31:47.182250 1219947 out.go:177] * Enabled addons: 
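
The empty "Enabled addons" line reflects that pause-343154 was started with --install-addons=false (see the command table above). That can be confirmed from the host with the same binary (illustrative):

    out/minikube-linux-amd64 addons list -p pause-343154
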
	I0731 23:31:43.632662 1220182 main.go:141] libmachine: (stopped-upgrade-692279) Calling .GetIP
	I0731 23:31:43.636458 1220182 main.go:141] libmachine: (stopped-upgrade-692279) DBG | domain stopped-upgrade-692279 has defined MAC address 52:54:00:c6:f1:df in network mk-stopped-upgrade-692279
	I0731 23:31:43.637030 1220182 main.go:141] libmachine: (stopped-upgrade-692279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f1:df", ip: ""} in network mk-stopped-upgrade-692279: {Iface:virbr4 ExpiryTime:2024-08-01 00:31:33 +0000 UTC Type:0 Mac:52:54:00:c6:f1:df Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:stopped-upgrade-692279 Clientid:01:52:54:00:c6:f1:df}
	I0731 23:31:43.637071 1220182 main.go:141] libmachine: (stopped-upgrade-692279) DBG | domain stopped-upgrade-692279 has defined IP address 192.168.72.120 and MAC address 52:54:00:c6:f1:df in network mk-stopped-upgrade-692279
	I0731 23:31:43.637518 1220182 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 23:31:43.642443 1220182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:31:43.657238 1220182 kubeadm.go:883] updating cluster {Name:stopped-upgrade-692279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stop
ped-upgrade-692279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.120 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 23:31:43.657397 1220182 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0731 23:31:43.657463 1220182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:31:43.700506 1220182 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0731 23:31:43.700596 1220182 ssh_runner.go:195] Run: which lz4
	I0731 23:31:43.705266 1220182 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 23:31:43.710602 1220182 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 23:31:43.710641 1220182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I0731 23:31:45.203813 1220182 crio.go:462] duration metric: took 1.498584224s to copy over tarball
	I0731 23:31:45.203905 1220182 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
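
Once the preloaded tarball has been unpacked under /var in the stopped-upgrade-692279 guest, the images cri-o reported as missing (for example registry.k8s.io/kube-apiserver:v1.24.1) should be listable again. A quick check from the host (sketch):

    out/minikube-linux-amd64 -p stopped-upgrade-692279 ssh -- 'sudo crictl images | grep kube-apiserver'
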
	I0731 23:31:47.183744 1219947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:31:47.183743 1219947 addons.go:510] duration metric: took 3.777279ms for enable addons: enabled=[]
	I0731 23:31:47.360014 1219947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:31:47.375419 1219947 node_ready.go:35] waiting up to 6m0s for node "pause-343154" to be "Ready" ...
	I0731 23:31:47.379333 1219947 node_ready.go:49] node "pause-343154" has status "Ready":"True"
	I0731 23:31:47.379363 1219947 node_ready.go:38] duration metric: took 3.900991ms for node "pause-343154" to be "Ready" ...
	I0731 23:31:47.379376 1219947 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:31:47.436503 1219947 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v29v8" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.834834 1219947 pod_ready.go:92] pod "coredns-7db6d8ff4d-v29v8" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:47.834870 1219947 pod_ready.go:81] duration metric: took 398.328225ms for pod "coredns-7db6d8ff4d-v29v8" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.834898 1219947 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:48.234490 1219947 pod_ready.go:92] pod "etcd-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:48.234527 1219947 pod_ready.go:81] duration metric: took 399.62001ms for pod "etcd-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:48.234543 1219947 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:43.891767 1220421 machine.go:94] provisionDockerMachine start ...
	I0731 23:31:43.891816 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:43.892197 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:43.895770 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:43.896553 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:43.896591 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:43.896805 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:43.897044 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:43.897243 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:43.897545 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:43.897793 1220421 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:43.898056 1220421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.53 22 <nil> <nil>}
	I0731 23:31:43.898079 1220421 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:31:44.028707 1220421 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-524949
	
	I0731 23:31:44.028751 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetMachineName
	I0731 23:31:44.029082 1220421 buildroot.go:166] provisioning hostname "running-upgrade-524949"
	I0731 23:31:44.029119 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetMachineName
	I0731 23:31:44.029358 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:44.033908 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.034291 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.034420 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.034850 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:44.035128 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.035349 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.035537 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:44.035744 1220421 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:44.035993 1220421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.53 22 <nil> <nil>}
	I0731 23:31:44.036010 1220421 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-524949 && echo "running-upgrade-524949" | sudo tee /etc/hostname
	I0731 23:31:44.195355 1220421 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-524949
	
	I0731 23:31:44.195392 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:44.199206 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.199600 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.199629 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.199994 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:44.200251 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.200426 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.200613 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:44.200836 1220421 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:44.201077 1220421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.53 22 <nil> <nil>}
	I0731 23:31:44.201102 1220421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-524949' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-524949/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-524949' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:31:44.334345 1220421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
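
At this point the hostname and the 127.0.1.1 entry written by the script above should both be in place; a spot check from the host (illustrative):

    out/minikube-linux-amd64 -p running-upgrade-524949 ssh -- 'hostname; grep running-upgrade-524949 /etc/hosts'
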
	I0731 23:31:44.334382 1220421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 23:31:44.334408 1220421 buildroot.go:174] setting up certificates
	I0731 23:31:44.334421 1220421 provision.go:84] configureAuth start
	I0731 23:31:44.334435 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetMachineName
	I0731 23:31:44.334733 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetIP
	I0731 23:31:44.338647 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.339417 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.339454 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.339701 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:44.342890 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.343450 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.343494 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.343595 1220421 provision.go:143] copyHostCerts
	I0731 23:31:44.343667 1220421 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 23:31:44.343682 1220421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 23:31:44.343759 1220421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 23:31:44.343893 1220421 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 23:31:44.343906 1220421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 23:31:44.343937 1220421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 23:31:44.344018 1220421 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 23:31:44.344036 1220421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 23:31:44.344075 1220421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 23:31:44.344184 1220421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-524949 san=[127.0.0.1 192.168.83.53 localhost minikube running-upgrade-524949]
	I0731 23:31:44.696873 1220421 provision.go:177] copyRemoteCerts
	I0731 23:31:44.696966 1220421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:31:44.697007 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:44.703467 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.703944 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.703985 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.704363 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:44.704635 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.704855 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:44.705077 1220421 sshutil.go:53] new ssh client: &{IP:192.168.83.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/running-upgrade-524949/id_rsa Username:docker}
	I0731 23:31:44.811938 1220421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 23:31:44.847572 1220421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 23:31:44.880161 1220421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 23:31:44.908491 1220421 provision.go:87] duration metric: took 574.05434ms to configureAuth
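
configureAuth regenerated /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem with the SANs listed above (127.0.0.1, 192.168.83.53, localhost, minikube, running-upgrade-524949). The certificate can be inspected on the host to verify that (sketch):

    openssl x509 -in /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'
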
	I0731 23:31:44.908529 1220421 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:31:44.908766 1220421 config.go:182] Loaded profile config "running-upgrade-524949": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0731 23:31:44.908878 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:44.912232 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.912674 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.912714 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.913081 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:44.913307 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.913497 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.913653 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:44.913883 1220421 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:44.914143 1220421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.53 22 <nil> <nil>}
	I0731 23:31:44.914169 1220421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 23:31:45.485014 1220421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 23:31:45.485049 1220421 machine.go:97] duration metric: took 1.593251473s to provisionDockerMachine
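
provisionDockerMachine finishes by writing /etc/sysconfig/crio.minikube with the insecure-registry option and restarting cri-o. Whether that drop-in took effect can be checked inside the guest (illustrative):

    out/minikube-linux-amd64 -p running-upgrade-524949 ssh -- 'cat /etc/sysconfig/crio.minikube; systemctl is-active crio'
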
	I0731 23:31:45.485062 1220421 start.go:293] postStartSetup for "running-upgrade-524949" (driver="kvm2")
	I0731 23:31:45.485073 1220421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:31:45.485096 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:45.485493 1220421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:31:45.485534 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:45.488479 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.488921 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:45.488961 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.489139 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:45.489385 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:45.489548 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:45.489686 1220421 sshutil.go:53] new ssh client: &{IP:192.168.83.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/running-upgrade-524949/id_rsa Username:docker}
	I0731 23:31:45.581494 1220421 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:31:45.585885 1220421 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 23:31:45.585924 1220421 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 23:31:45.586008 1220421 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 23:31:45.586116 1220421 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 23:31:45.586239 1220421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:31:45.596178 1220421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:31:45.633418 1220421 start.go:296] duration metric: took 148.336539ms for postStartSetup
	I0731 23:31:45.633465 1220421 fix.go:56] duration metric: took 1.768361154s for fixHost
	I0731 23:31:45.633505 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:45.636928 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.637387 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:45.637422 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.637561 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:45.637810 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:45.638043 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:45.638214 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:45.638394 1220421 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:45.638624 1220421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.53 22 <nil> <nil>}
	I0731 23:31:45.638638 1220421 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 23:31:45.761904 1220421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722468705.753875573
	
	I0731 23:31:45.761938 1220421 fix.go:216] guest clock: 1722468705.753875573
	I0731 23:31:45.761962 1220421 fix.go:229] Guest: 2024-07-31 23:31:45.753875573 +0000 UTC Remote: 2024-07-31 23:31:45.633475999 +0000 UTC m=+1.962900402 (delta=120.399574ms)
	I0731 23:31:45.762013 1220421 fix.go:200] guest clock delta is within tolerance: 120.399574ms
	I0731 23:31:45.762020 1220421 start.go:83] releasing machines lock for "running-upgrade-524949", held for 1.896934813s
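
The guest-clock check above compares date +%s.%N inside the VM against the host clock and accepts the ~120ms delta as within tolerance. The same comparison by hand (sketch):

    date +%s.%N
    out/minikube-linux-amd64 -p running-upgrade-524949 ssh -- 'date +%s.%N'
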
	I0731 23:31:45.762048 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:45.762380 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetIP
	I0731 23:31:45.766279 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.766855 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:45.766897 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.767263 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:45.767930 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:45.768198 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:45.768292 1220421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 23:31:45.768352 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:45.768428 1220421 ssh_runner.go:195] Run: cat /version.json
	I0731 23:31:45.768467 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:45.771671 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.772083 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:45.772139 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.772446 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:45.772648 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:45.772790 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:45.772837 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.772948 1220421 sshutil.go:53] new ssh client: &{IP:192.168.83.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/running-upgrade-524949/id_rsa Username:docker}
	I0731 23:31:45.773340 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:45.773362 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.773555 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:45.773808 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:45.773998 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:45.774167 1220421 sshutil.go:53] new ssh client: &{IP:192.168.83.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/running-upgrade-524949/id_rsa Username:docker}
	W0731 23:31:45.879906 1220421 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 23:31:45.880007 1220421 ssh_runner.go:195] Run: systemctl --version
	I0731 23:31:45.886274 1220421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 23:31:46.029923 1220421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 23:31:46.037547 1220421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:31:46.037642 1220421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:31:46.057973 1220421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
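
After 87-podman-bridge.conflist is renamed with a .mk_disabled suffix, only the CNI configs minikube wants should remain active. Listing the directory in the guest shows the result (sketch):

    out/minikube-linux-amd64 -p running-upgrade-524949 ssh -- 'ls -l /etc/cni/net.d/'
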
	I0731 23:31:46.058011 1220421 start.go:495] detecting cgroup driver to use...
	I0731 23:31:46.058097 1220421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:31:46.074491 1220421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:31:46.091435 1220421 docker.go:217] disabling cri-docker service (if available) ...
	I0731 23:31:46.091511 1220421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 23:31:46.104217 1220421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 23:31:46.119834 1220421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 23:31:46.266446 1220421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 23:31:46.414373 1220421 docker.go:233] disabling docker service ...
	I0731 23:31:46.414476 1220421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 23:31:46.433096 1220421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 23:31:46.450724 1220421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 23:31:46.621499 1220421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 23:31:46.795328 1220421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 23:31:46.808825 1220421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:31:46.831014 1220421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0731 23:31:46.831113 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.842286 1220421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 23:31:46.842369 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.852698 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.864400 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.876292 1220421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:31:46.890064 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.899827 1220421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.917714 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.929234 1220421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:31:46.940559 1220421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:31:46.951679 1220421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:31:47.158479 1220421 ssh_runner.go:195] Run: sudo systemctl restart crio
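
The sed edits above patch /etc/crio/crio.conf.d/02-crio.conf in place; taken together they set the pause image, switch cri-o to the cgroupfs cgroup manager with conmon in the pod cgroup, and open unprivileged low ports. Written as a fresh drop-in instead of in-place edits, the equivalent configuration would look roughly like this (a sketch to run inside the VM; the 99-minikube-example.conf name is made up):

    sudo tee /etc/crio/crio.conf.d/99-minikube-example.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.7"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl restart crio
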
	I0731 23:31:46.759805 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:31:46.760115 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:31:46.760127 1218502 kubeadm.go:310] 
	I0731 23:31:46.760177 1218502 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 23:31:46.760229 1218502 kubeadm.go:310] 		timed out waiting for the condition
	I0731 23:31:46.760236 1218502 kubeadm.go:310] 
	I0731 23:31:46.760276 1218502 kubeadm.go:310] 	This error is likely caused by:
	I0731 23:31:46.760321 1218502 kubeadm.go:310] 		- The kubelet is not running
	I0731 23:31:46.760447 1218502 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 23:31:46.760455 1218502 kubeadm.go:310] 
	I0731 23:31:46.760550 1218502 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 23:31:46.760577 1218502 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 23:31:46.760603 1218502 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 23:31:46.760607 1218502 kubeadm.go:310] 
	I0731 23:31:46.760697 1218502 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 23:31:46.760767 1218502 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 23:31:46.760772 1218502 kubeadm.go:310] 
	I0731 23:31:46.760869 1218502 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 23:31:46.760968 1218502 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 23:31:46.761047 1218502 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 23:31:46.761106 1218502 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 23:31:46.761109 1218502 kubeadm.go:310] 
	I0731 23:31:46.761955 1218502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:31:46.762078 1218502 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 23:31:46.762159 1218502 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 23:31:46.762332 1218502 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-351764 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-351764 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
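The warning and troubleshooting hints in this output translate into a short, runnable check; a minimal sketch, assuming the suggested commands are run inside the affected VM via minikube ssh using the profile name that appears in the certificate output (kubernetes-upgrade-351764):

	# enable kubelet as the [WARNING Service-Kubelet] line suggests, then inspect it
	minikube ssh -p kubernetes-upgrade-351764 -- sudo systemctl enable kubelet.service
	minikube ssh -p kubernetes-upgrade-351764 -- sudo systemctl status kubelet
	minikube ssh -p kubernetes-upgrade-351764 -- sudo journalctl -xeu kubelet
	# list any kube containers CRI-O managed to start, per the hint above
	minikube ssh -p kubernetes-upgrade-351764 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"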
	
	I0731 23:31:46.762394 1218502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 23:31:47.361436 1218502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:31:47.377093 1218502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 23:31:47.391110 1218502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:31:47.391138 1218502 kubeadm.go:157] found existing configuration files:
	
	I0731 23:31:47.391204 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 23:31:47.403341 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:31:47.403456 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 23:31:47.416875 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 23:31:47.429337 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:31:47.429423 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 23:31:47.442697 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 23:31:47.455492 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:31:47.455588 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 23:31:47.468044 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 23:31:47.480083 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:31:47.480199 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 23:31:47.495474 1218502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 23:31:47.745355 1218502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:31:48.634588 1219947 pod_ready.go:92] pod "kube-apiserver-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:48.634615 1219947 pod_ready.go:81] duration metric: took 400.061905ms for pod "kube-apiserver-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:48.634630 1219947 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.035008 1219947 pod_ready.go:92] pod "kube-controller-manager-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:49.035046 1219947 pod_ready.go:81] duration metric: took 400.406919ms for pod "kube-controller-manager-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.035061 1219947 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-262z4" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.709403 1219947 pod_ready.go:92] pod "kube-proxy-262z4" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:49.709429 1219947 pod_ready.go:81] duration metric: took 674.360831ms for pod "kube-proxy-262z4" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.709441 1219947 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.833896 1219947 pod_ready.go:92] pod "kube-scheduler-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:49.833923 1219947 pod_ready.go:81] duration metric: took 124.474838ms for pod "kube-scheduler-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.833932 1219947 pod_ready.go:38] duration metric: took 2.454544512s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:31:49.833962 1219947 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:31:49.834036 1219947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:31:49.848669 1219947 api_server.go:72] duration metric: took 2.668777827s to wait for apiserver process to appear ...
	I0731 23:31:49.848708 1219947 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:31:49.848738 1219947 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I0731 23:31:49.854092 1219947 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I0731 23:31:49.855097 1219947 api_server.go:141] control plane version: v1.30.3
	I0731 23:31:49.855126 1219947 api_server.go:131] duration metric: took 6.407554ms to wait for apiserver health ...
	I0731 23:31:49.855136 1219947 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 23:31:50.036309 1219947 system_pods.go:59] 6 kube-system pods found
	I0731 23:31:50.036343 1219947 system_pods.go:61] "coredns-7db6d8ff4d-v29v8" [247719d4-4db6-4e42-aa5b-ee65d12de302] Running
	I0731 23:31:50.036348 1219947 system_pods.go:61] "etcd-pause-343154" [d7a2576d-d2f7-4cd2-96f2-1f78a32a859c] Running
	I0731 23:31:50.036352 1219947 system_pods.go:61] "kube-apiserver-pause-343154" [971e0137-ea8a-4b9d-83b3-14ce1308ee93] Running
	I0731 23:31:50.036355 1219947 system_pods.go:61] "kube-controller-manager-pause-343154" [19ba4fb3-bb6e-43f6-a1cd-7d8aef4daae1] Running
	I0731 23:31:50.036358 1219947 system_pods.go:61] "kube-proxy-262z4" [17405d1b-40da-4fdf-ae4e-0730d3737150] Running
	I0731 23:31:50.036361 1219947 system_pods.go:61] "kube-scheduler-pause-343154" [b8e8b790-47e3-450b-ad71-bd1d3c9001d2] Running
	I0731 23:31:50.036366 1219947 system_pods.go:74] duration metric: took 181.224658ms to wait for pod list to return data ...
	I0731 23:31:50.036374 1219947 default_sa.go:34] waiting for default service account to be created ...
	I0731 23:31:50.233918 1219947 default_sa.go:45] found service account: "default"
	I0731 23:31:50.233949 1219947 default_sa.go:55] duration metric: took 197.568227ms for default service account to be created ...
	I0731 23:31:50.233959 1219947 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 23:31:50.435696 1219947 system_pods.go:86] 6 kube-system pods found
	I0731 23:31:50.435731 1219947 system_pods.go:89] "coredns-7db6d8ff4d-v29v8" [247719d4-4db6-4e42-aa5b-ee65d12de302] Running
	I0731 23:31:50.435736 1219947 system_pods.go:89] "etcd-pause-343154" [d7a2576d-d2f7-4cd2-96f2-1f78a32a859c] Running
	I0731 23:31:50.435740 1219947 system_pods.go:89] "kube-apiserver-pause-343154" [971e0137-ea8a-4b9d-83b3-14ce1308ee93] Running
	I0731 23:31:50.435745 1219947 system_pods.go:89] "kube-controller-manager-pause-343154" [19ba4fb3-bb6e-43f6-a1cd-7d8aef4daae1] Running
	I0731 23:31:50.435748 1219947 system_pods.go:89] "kube-proxy-262z4" [17405d1b-40da-4fdf-ae4e-0730d3737150] Running
	I0731 23:31:50.435752 1219947 system_pods.go:89] "kube-scheduler-pause-343154" [b8e8b790-47e3-450b-ad71-bd1d3c9001d2] Running
	I0731 23:31:50.435759 1219947 system_pods.go:126] duration metric: took 201.794648ms to wait for k8s-apps to be running ...
	I0731 23:31:50.435771 1219947 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 23:31:50.435822 1219947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:31:50.450675 1219947 system_svc.go:56] duration metric: took 14.888301ms WaitForService to wait for kubelet
	I0731 23:31:50.450714 1219947 kubeadm.go:582] duration metric: took 3.270832431s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:31:50.450734 1219947 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:31:50.634591 1219947 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:31:50.634628 1219947 node_conditions.go:123] node cpu capacity is 2
	I0731 23:31:50.634640 1219947 node_conditions.go:105] duration metric: took 183.901344ms to run NodePressure ...
	I0731 23:31:50.634651 1219947 start.go:241] waiting for startup goroutines ...
	I0731 23:31:50.634658 1219947 start.go:246] waiting for cluster config update ...
	I0731 23:31:50.634665 1219947 start.go:255] writing updated cluster config ...
	I0731 23:31:50.743002 1219947 ssh_runner.go:195] Run: rm -f paused
	I0731 23:31:50.796583 1219947 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 23:31:51.066459 1219947 out.go:177] * Done! kubectl is now configured to use "pause-343154" cluster and "default" namespace by default
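The readiness wait above reduces to the apiserver /healthz probe that returned 200; the same check can be reproduced by hand against the endpoint shown in the log (the kubectl context name is assumed to match the pause-343154 profile):

	# via the API server's raw /healthz path
	kubectl --context pause-343154 get --raw /healthz
	# or directly against the advertised endpoint (self-signed cert, hence -k)
	curl -k https://192.168.61.235:8443/healthz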
	
	
	==> CRI-O <==
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.026367296Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468712026338256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d478ef16-7b14-481a-a4d4-3e5843a67f71 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.027052672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e07e98d-86c2-4111-8d0b-a12769237e7c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.027118951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e07e98d-86c2-4111-8d0b-a12769237e7c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.027428393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0eee0a568fc10e48d174b937e159e333ea46f5fa9499771854666273d45c8fd0,PodSandboxId:a0142d52dda7a52f28b3b4a15d326411f3e66081fb6aa365bbc620ff920cdff5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468693427281750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1831912aa22eccc61e9ae801f143c1be982f9fbfae43abc9dadaf76114146b2e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722468692957265578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0e402e1f5c20bc523700d27f2c0e6bb8423cf66f3985fa8ef5337887f0ad78,PodSandboxId:922012a74e0e469be6b440e93f1b5bc6861d2f39e3f7b9fff50cc993555136c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722468689326106235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eea09028
7bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5828bbebd37055344533d35a31c3d3e9f00f08746995ec6598ce8159deb3842,PodSandboxId:5b1321b988fef8996007f3c02f26703be1d4f6fd522f1f5f60cf942c6da61d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722468689316370141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfb2d384a50d4a6886f9f1aa7aa7fe2d48e44b81622384608c3938419560305,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722468689135131659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb247f23ee3bf1ed4c004429a2900223d82221ab545c16d9b4d169b56470e5ca,PodSandboxId:e044eee34222149673bdb9d26cb1482475b902f154ff590b1c610c3bcfeb7245,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722468686586390910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io
.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722468684367647368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00
c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722468684125665095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9d4617012ae22820ae12d0b4b652cba31daacd8174ca8f4df2a02ba020f18e,PodSandboxId:cb0df8747f7480aaa01a66521ff380e797afa73fe78a7bc00a514e6a5c56785c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468659543172576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c50233c020c9d08dcb523b543dfe9dfc885fe0bd32cd9dbfdea347c8dc7f199,PodSandboxId:274cdf30769595f71bdc44aa47b073fc63cb9d45868a0917d657972a84aac21d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722468639092835515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240c60cf8d397f37e67c3e43694db8126659e80dddf59faf36bec8b549860cfc,PodSandboxId:78688a7446259928a5bcfc2a3e880f5b288983897364aa265282faf354a2554b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722468639080729749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 7a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a774d18a7254f6d800d5381f4e49913ea1b5b7ad036aead2b86dd286929c1f54,PodSandboxId:66735aa1c767011e136a5b6ad5745bd118a77d0aa50f4648fcefca990fef0378,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722468638908114818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29eea090287bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e07e98d-86c2-4111-8d0b-a12769237e7c name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.077938987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d4c11e2e-0e18-4e9b-97ea-b747f3f64ec8 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.078030236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d4c11e2e-0e18-4e9b-97ea-b747f3f64ec8 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.079553626Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1e911b4-f212-4071-9c69-95db27686963 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.080028811Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468712080002571,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1e911b4-f212-4071-9c69-95db27686963 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.080649237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62cbe56e-3580-420f-9dcf-527491a2da31 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.080763139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62cbe56e-3580-420f-9dcf-527491a2da31 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.081040859Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0eee0a568fc10e48d174b937e159e333ea46f5fa9499771854666273d45c8fd0,PodSandboxId:a0142d52dda7a52f28b3b4a15d326411f3e66081fb6aa365bbc620ff920cdff5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468693427281750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1831912aa22eccc61e9ae801f143c1be982f9fbfae43abc9dadaf76114146b2e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722468692957265578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0e402e1f5c20bc523700d27f2c0e6bb8423cf66f3985fa8ef5337887f0ad78,PodSandboxId:922012a74e0e469be6b440e93f1b5bc6861d2f39e3f7b9fff50cc993555136c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722468689326106235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eea09028
7bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5828bbebd37055344533d35a31c3d3e9f00f08746995ec6598ce8159deb3842,PodSandboxId:5b1321b988fef8996007f3c02f26703be1d4f6fd522f1f5f60cf942c6da61d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722468689316370141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfb2d384a50d4a6886f9f1aa7aa7fe2d48e44b81622384608c3938419560305,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722468689135131659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb247f23ee3bf1ed4c004429a2900223d82221ab545c16d9b4d169b56470e5ca,PodSandboxId:e044eee34222149673bdb9d26cb1482475b902f154ff590b1c610c3bcfeb7245,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722468686586390910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io
.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722468684367647368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00
c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722468684125665095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9d4617012ae22820ae12d0b4b652cba31daacd8174ca8f4df2a02ba020f18e,PodSandboxId:cb0df8747f7480aaa01a66521ff380e797afa73fe78a7bc00a514e6a5c56785c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468659543172576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c50233c020c9d08dcb523b543dfe9dfc885fe0bd32cd9dbfdea347c8dc7f199,PodSandboxId:274cdf30769595f71bdc44aa47b073fc63cb9d45868a0917d657972a84aac21d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722468639092835515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240c60cf8d397f37e67c3e43694db8126659e80dddf59faf36bec8b549860cfc,PodSandboxId:78688a7446259928a5bcfc2a3e880f5b288983897364aa265282faf354a2554b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722468639080729749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 7a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a774d18a7254f6d800d5381f4e49913ea1b5b7ad036aead2b86dd286929c1f54,PodSandboxId:66735aa1c767011e136a5b6ad5745bd118a77d0aa50f4648fcefca990fef0378,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722468638908114818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29eea090287bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62cbe56e-3580-420f-9dcf-527491a2da31 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.124267046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d5dad23-31bf-4336-8c13-f798be823373 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.124350898Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d5dad23-31bf-4336-8c13-f798be823373 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.125750295Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0dab96e-db44-436b-8a0e-fbdd93e62638 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.126452688Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468712126422802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0dab96e-db44-436b-8a0e-fbdd93e62638 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.127043749Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1d42ce6-b4e2-41ab-b371-377f14f465da name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.127115891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1d42ce6-b4e2-41ab-b371-377f14f465da name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.127546039Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0eee0a568fc10e48d174b937e159e333ea46f5fa9499771854666273d45c8fd0,PodSandboxId:a0142d52dda7a52f28b3b4a15d326411f3e66081fb6aa365bbc620ff920cdff5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468693427281750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1831912aa22eccc61e9ae801f143c1be982f9fbfae43abc9dadaf76114146b2e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722468692957265578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0e402e1f5c20bc523700d27f2c0e6bb8423cf66f3985fa8ef5337887f0ad78,PodSandboxId:922012a74e0e469be6b440e93f1b5bc6861d2f39e3f7b9fff50cc993555136c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722468689326106235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eea09028
7bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5828bbebd37055344533d35a31c3d3e9f00f08746995ec6598ce8159deb3842,PodSandboxId:5b1321b988fef8996007f3c02f26703be1d4f6fd522f1f5f60cf942c6da61d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722468689316370141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfb2d384a50d4a6886f9f1aa7aa7fe2d48e44b81622384608c3938419560305,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722468689135131659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb247f23ee3bf1ed4c004429a2900223d82221ab545c16d9b4d169b56470e5ca,PodSandboxId:e044eee34222149673bdb9d26cb1482475b902f154ff590b1c610c3bcfeb7245,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722468686586390910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io
.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722468684367647368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00
c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722468684125665095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9d4617012ae22820ae12d0b4b652cba31daacd8174ca8f4df2a02ba020f18e,PodSandboxId:cb0df8747f7480aaa01a66521ff380e797afa73fe78a7bc00a514e6a5c56785c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468659543172576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c50233c020c9d08dcb523b543dfe9dfc885fe0bd32cd9dbfdea347c8dc7f199,PodSandboxId:274cdf30769595f71bdc44aa47b073fc63cb9d45868a0917d657972a84aac21d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722468639092835515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240c60cf8d397f37e67c3e43694db8126659e80dddf59faf36bec8b549860cfc,PodSandboxId:78688a7446259928a5bcfc2a3e880f5b288983897364aa265282faf354a2554b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722468639080729749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 7a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a774d18a7254f6d800d5381f4e49913ea1b5b7ad036aead2b86dd286929c1f54,PodSandboxId:66735aa1c767011e136a5b6ad5745bd118a77d0aa50f4648fcefca990fef0378,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722468638908114818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29eea090287bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1d42ce6-b4e2-41ab-b371-377f14f465da name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.185713745Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ba17213-c65d-4b4b-8c0b-0065c609de27 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.185803360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ba17213-c65d-4b4b-8c0b-0065c609de27 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.188347852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea1366a2-d00c-4d1d-bb0f-63fe098aa2bc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.188945077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468712188911460,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea1366a2-d00c-4d1d-bb0f-63fe098aa2bc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.189882234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb7597ed-1ee2-4b7e-b7ef-7722b1daef1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.189958468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb7597ed-1ee2-4b7e-b7ef-7722b1daef1d name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:52 pause-343154 crio[2272]: time="2024-07-31 23:31:52.193809610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0eee0a568fc10e48d174b937e159e333ea46f5fa9499771854666273d45c8fd0,PodSandboxId:a0142d52dda7a52f28b3b4a15d326411f3e66081fb6aa365bbc620ff920cdff5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468693427281750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1831912aa22eccc61e9ae801f143c1be982f9fbfae43abc9dadaf76114146b2e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722468692957265578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0e402e1f5c20bc523700d27f2c0e6bb8423cf66f3985fa8ef5337887f0ad78,PodSandboxId:922012a74e0e469be6b440e93f1b5bc6861d2f39e3f7b9fff50cc993555136c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722468689326106235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eea09028
7bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5828bbebd37055344533d35a31c3d3e9f00f08746995ec6598ce8159deb3842,PodSandboxId:5b1321b988fef8996007f3c02f26703be1d4f6fd522f1f5f60cf942c6da61d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722468689316370141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfb2d384a50d4a6886f9f1aa7aa7fe2d48e44b81622384608c3938419560305,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722468689135131659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb247f23ee3bf1ed4c004429a2900223d82221ab545c16d9b4d169b56470e5ca,PodSandboxId:e044eee34222149673bdb9d26cb1482475b902f154ff590b1c610c3bcfeb7245,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722468686586390910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io
.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722468684367647368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00
c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722468684125665095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9d4617012ae22820ae12d0b4b652cba31daacd8174ca8f4df2a02ba020f18e,PodSandboxId:cb0df8747f7480aaa01a66521ff380e797afa73fe78a7bc00a514e6a5c56785c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468659543172576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c50233c020c9d08dcb523b543dfe9dfc885fe0bd32cd9dbfdea347c8dc7f199,PodSandboxId:274cdf30769595f71bdc44aa47b073fc63cb9d45868a0917d657972a84aac21d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722468639092835515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240c60cf8d397f37e67c3e43694db8126659e80dddf59faf36bec8b549860cfc,PodSandboxId:78688a7446259928a5bcfc2a3e880f5b288983897364aa265282faf354a2554b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722468639080729749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 7a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a774d18a7254f6d800d5381f4e49913ea1b5b7ad036aead2b86dd286929c1f54,PodSandboxId:66735aa1c767011e136a5b6ad5745bd118a77d0aa50f4648fcefca990fef0378,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722468638908114818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29eea090287bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb7597ed-1ee2-4b7e-b7ef-7722b1daef1d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0eee0a568fc10       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 seconds ago       Running             coredns                   1                   a0142d52dda7a       coredns-7db6d8ff4d-v29v8
	1831912aa22ec       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 seconds ago       Running             kube-proxy                2                   24dcf66a7c4ee       kube-proxy-262z4
	0d0e402e1f5c2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   22 seconds ago       Running             kube-apiserver            1                   922012a74e0e4       kube-apiserver-pause-343154
	c5828bbebd370       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   22 seconds ago       Running             kube-controller-manager   1                   5b1321b988fef       kube-controller-manager-pause-343154
	1bfb2d384a50d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago       Running             etcd                      2                   3f46b064e0325       etcd-pause-343154
	cb247f23ee3bf       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   25 seconds ago       Running             kube-scheduler            1                   e044eee342221       kube-scheduler-pause-343154
	0a6163dc45dca       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   27 seconds ago       Exited              kube-proxy                1                   24dcf66a7c4ee       kube-proxy-262z4
	89a3312666995       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   28 seconds ago       Exited              etcd                      1                   3f46b064e0325       etcd-pause-343154
	4c9d4617012ae       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   52 seconds ago       Exited              coredns                   0                   cb0df8747f748       coredns-7db6d8ff4d-v29v8
	1c50233c020c9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   About a minute ago   Exited              kube-scheduler            0                   274cdf3076959       kube-scheduler-pause-343154
	240c60cf8d397       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   About a minute ago   Exited              kube-controller-manager   0                   78688a7446259       kube-controller-manager-pause-343154
	a774d18a7254f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   About a minute ago   Exited              kube-apiserver            0                   66735aa1c7670       kube-apiserver-pause-343154
	
	
	==> coredns [0eee0a568fc10e48d174b937e159e333ea46f5fa9499771854666273d45c8fd0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59063 - 54903 "HINFO IN 2354539187072714327.8093630560496879967. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03842797s
	
	
	==> coredns [4c9d4617012ae22820ae12d0b4b652cba31daacd8174ca8f4df2a02ba020f18e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51612 - 11218 "HINFO IN 2228504331175536696.7769828862450269. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.061741113s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-343154
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-343154
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=pause-343154
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T23_30_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:30:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-343154
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:31:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:31:32 +0000   Wed, 31 Jul 2024 23:30:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:31:32 +0000   Wed, 31 Jul 2024 23:30:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:31:32 +0000   Wed, 31 Jul 2024 23:30:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:31:32 +0000   Wed, 31 Jul 2024 23:30:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.235
	  Hostname:    pause-343154
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ff36ba5e9be45d5b49993e7a22cb716
	  System UUID:                0ff36ba5-e9be-45d5-b499-93e7a22cb716
	  Boot ID:                    cec5f905-52ec-45ce-9596-90127bbdc0f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-v29v8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     55s
	  kube-system                 etcd-pause-343154                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         68s
	  kube-system                 kube-apiserver-pause-343154             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-pause-343154    200m (10%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-262z4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-pause-343154             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node pause-343154 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node pause-343154 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node pause-343154 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    68s                kubelet          Node pause-343154 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  68s                kubelet          Node pause-343154 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     68s                kubelet          Node pause-343154 status is now: NodeHasSufficientPID
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeReady                67s                kubelet          Node pause-343154 status is now: NodeReady
	  Normal  RegisteredNode           55s                node-controller  Node pause-343154 event: Registered Node pause-343154 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-343154 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-343154 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-343154 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                 node-controller  Node pause-343154 event: Registered Node pause-343154 in Controller
	
	
	==> dmesg <==
	[  +8.108258] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.132140] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.179640] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.161251] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.298559] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.640115] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.061020] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.462455] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.595757] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.042483] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.104554] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.884676] systemd-fstab-generator[1497]: Ignoring "noauto" option for root device
	[  +0.155911] kauditd_printk_skb: 21 callbacks suppressed
	[Jul31 23:31] systemd-fstab-generator[2140]: Ignoring "noauto" option for root device
	[  +0.086939] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.064128] systemd-fstab-generator[2152]: Ignoring "noauto" option for root device
	[  +0.188886] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.153243] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.329155] systemd-fstab-generator[2207]: Ignoring "noauto" option for root device
	[  +1.057613] systemd-fstab-generator[2392]: Ignoring "noauto" option for root device
	[  +3.218580] kauditd_printk_skb: 158 callbacks suppressed
	[  +1.560244] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +4.531495] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.795121] kauditd_printk_skb: 20 callbacks suppressed
	[  +1.467682] systemd-fstab-generator[3375]: Ignoring "noauto" option for root device
	
	
	==> etcd [1bfb2d384a50d4a6886f9f1aa7aa7fe2d48e44b81622384608c3938419560305] <==
	{"level":"info","ts":"2024-07-31T23:31:29.349797Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:31:29.349843Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:31:29.357423Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T23:31:29.357992Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.235:2380"}
	{"level":"info","ts":"2024-07-31T23:31:29.358117Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.235:2380"}
	{"level":"info","ts":"2024-07-31T23:31:29.361705Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T23:31:29.361634Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5c9ce5d2cd86398f","initial-advertise-peer-urls":["https://192.168.61.235:2380"],"listen-peer-urls":["https://192.168.61.235:2380"],"advertise-client-urls":["https://192.168.61.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T23:31:30.830252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T23:31:30.830324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T23:31:30.830376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f received MsgPreVoteResp from 5c9ce5d2cd86398f at term 2"}
	{"level":"info","ts":"2024-07-31T23:31:30.830395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T23:31:30.830403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f received MsgVoteResp from 5c9ce5d2cd86398f at term 3"}
	{"level":"info","ts":"2024-07-31T23:31:30.830414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became leader at term 3"}
	{"level":"info","ts":"2024-07-31T23:31:30.830426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5c9ce5d2cd86398f elected leader 5c9ce5d2cd86398f at term 3"}
	{"level":"info","ts":"2024-07-31T23:31:30.833069Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5c9ce5d2cd86398f","local-member-attributes":"{Name:pause-343154 ClientURLs:[https://192.168.61.235:2379]}","request-path":"/0/members/5c9ce5d2cd86398f/attributes","cluster-id":"d507c5522fd9f0c3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T23:31:30.833128Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:31:30.833344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T23:31:30.833362Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T23:31:30.83339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:31:30.836476Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.235:2379"}
	{"level":"info","ts":"2024-07-31T23:31:30.836641Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-31T23:31:49.695748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.016952ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4147693258839274315 > lease_revoke:<id:398f910b2265b220>","response":"size:28"}
	{"level":"warn","ts":"2024-07-31T23:31:49.696131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.695723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-343154\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2024-07-31T23:31:49.695966Z","caller":"traceutil/trace.go:171","msg":"trace[351586450] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:530; }","duration":"273.500473ms","start":"2024-07-31T23:31:49.42245Z","end":"2024-07-31T23:31:49.695951Z","steps":["trace[351586450] 'read index received'  (duration: 13.09142ms)","trace[351586450] 'applied index is now lower than readState.Index'  (duration: 260.407755ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T23:31:49.696184Z","caller":"traceutil/trace.go:171","msg":"trace[200012709] range","detail":"{range_begin:/registry/minions/pause-343154; range_end:; response_count:1; response_revision:493; }","duration":"273.789832ms","start":"2024-07-31T23:31:49.42238Z","end":"2024-07-31T23:31:49.69617Z","steps":["trace[200012709] 'agreement among raft nodes before linearized reading'  (duration: 273.674148ms)"],"step_count":1}
	
	
	==> etcd [89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254] <==
	{"level":"info","ts":"2024-07-31T23:31:24.507609Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"9.77794ms"}
	{"level":"info","ts":"2024-07-31T23:31:24.52238Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-31T23:31:24.534894Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","commit-index":437}
	{"level":"info","ts":"2024-07-31T23:31:24.539852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-31T23:31:24.539929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became follower at term 2"}
	{"level":"info","ts":"2024-07-31T23:31:24.539973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 5c9ce5d2cd86398f [peers: [], term: 2, commit: 437, applied: 0, lastindex: 437, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-31T23:31:24.719558Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-31T23:31:24.965363Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":420}
	{"level":"info","ts":"2024-07-31T23:31:25.244838Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-31T23:31:25.37621Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"5c9ce5d2cd86398f","timeout":"7s"}
	{"level":"info","ts":"2024-07-31T23:31:25.376469Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"5c9ce5d2cd86398f"}
	{"level":"info","ts":"2024-07-31T23:31:25.376526Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"5c9ce5d2cd86398f","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-31T23:31:25.376824Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T23:31:25.37703Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:31:25.378123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f switched to configuration voters=(6673461441410251151)"}
	{"level":"info","ts":"2024-07-31T23:31:25.378368Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","added-peer-id":"5c9ce5d2cd86398f","added-peer-peer-urls":["https://192.168.61.235:2380"]}
	{"level":"info","ts":"2024-07-31T23:31:25.378913Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:31:25.379001Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:31:25.377143Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:31:25.379631Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:31:25.381163Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T23:31:25.381384Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5c9ce5d2cd86398f","initial-advertise-peer-urls":["https://192.168.61.235:2380"],"listen-peer-urls":["https://192.168.61.235:2380"],"advertise-client-urls":["https://192.168.61.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T23:31:25.381452Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T23:31:25.381565Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.235:2380"}
	{"level":"info","ts":"2024-07-31T23:31:25.381588Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.235:2380"}
	
	
	==> kernel <==
	 23:31:52 up 1 min,  0 users,  load average: 1.37, 0.52, 0.19
	Linux pause-343154 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0d0e402e1f5c20bc523700d27f2c0e6bb8423cf66f3985fa8ef5337887f0ad78] <==
	I0731 23:31:32.569971       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 23:31:32.649635       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 23:31:32.649795       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 23:31:32.652071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 23:31:32.660225       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 23:31:32.660279       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 23:31:32.662468       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 23:31:32.662543       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 23:31:32.662650       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 23:31:32.664462       1 aggregator.go:165] initial CRD sync complete...
	I0731 23:31:32.664505       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 23:31:32.664514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 23:31:32.664522       1 cache.go:39] Caches are synced for autoregister controller
	I0731 23:31:32.664560       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 23:31:32.664474       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 23:31:32.664744       1 policy_source.go:224] refreshing policies
	I0731 23:31:32.702772       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 23:31:33.461643       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 23:31:34.451660       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 23:31:34.471288       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 23:31:34.527974       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 23:31:34.574765       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 23:31:34.586893       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 23:31:45.739739       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 23:31:45.937715       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [a774d18a7254f6d800d5381f4e49913ea1b5b7ad036aead2b86dd286929c1f54] <==
	I0731 23:30:41.681037       1 controller.go:615] quota admission added evaluator for: namespaces
	E0731 23:30:41.693238       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0731 23:30:41.839517       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 23:30:42.471496       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0731 23:30:42.476468       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0731 23:30:42.476497       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 23:30:43.242141       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 23:30:43.303740       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 23:30:43.413471       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0731 23:30:43.421912       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.235]
	I0731 23:30:43.423049       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 23:30:43.432010       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 23:30:43.587320       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 23:30:44.502206       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 23:30:44.521822       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 23:30:44.537127       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 23:30:57.632634       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 23:30:57.857287       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0731 23:31:15.526253       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0731 23:31:15.547426       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:31:15.547506       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:31:15.548464       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:31:15.549237       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:31:15.549392       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:31:15.549478       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [240c60cf8d397f37e67c3e43694db8126659e80dddf59faf36bec8b549860cfc] <==
	I0731 23:30:57.741967       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-343154"
	I0731 23:30:57.742062       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 23:30:57.742125       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0731 23:30:57.742561       1 shared_informer.go:320] Caches are synced for GC
	I0731 23:30:57.738170       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 23:30:57.784456       1 shared_informer.go:320] Caches are synced for disruption
	I0731 23:30:57.793483       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0731 23:30:57.812775       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 23:30:57.823621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="173.765325ms"
	I0731 23:30:57.845975       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 23:30:57.846253       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 23:30:57.851961       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.190635ms"
	I0731 23:30:57.852107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.361µs"
	I0731 23:30:57.852194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.987µs"
	I0731 23:30:58.317285       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:30:58.335033       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:30:58.335093       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 23:30:58.889327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.057296ms"
	I0731 23:30:58.910358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.533254ms"
	I0731 23:30:58.912965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="98.88µs"
	I0731 23:30:58.916606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.664µs"
	I0731 23:30:58.941723       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="168.083µs"
	I0731 23:31:00.839519       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="137.779µs"
	I0731 23:31:02.057396       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="270.302613ms"
	I0731 23:31:02.057912       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.156µs"
	
	
	==> kube-controller-manager [c5828bbebd37055344533d35a31c3d3e9f00f08746995ec6598ce8159deb3842] <==
	I0731 23:31:45.734815       1 shared_informer.go:320] Caches are synced for TTL
	I0731 23:31:45.734935       1 shared_informer.go:320] Caches are synced for HPA
	I0731 23:31:45.735254       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 23:31:45.736439       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0731 23:31:45.740068       1 shared_informer.go:320] Caches are synced for PV protection
	I0731 23:31:45.742467       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 23:31:45.745133       1 shared_informer.go:320] Caches are synced for PVC protection
	I0731 23:31:45.750128       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 23:31:45.756119       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 23:31:45.763666       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0731 23:31:45.765044       1 shared_informer.go:320] Caches are synced for ephemeral
	I0731 23:31:45.768496       1 shared_informer.go:320] Caches are synced for GC
	I0731 23:31:45.815483       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0731 23:31:45.819845       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0731 23:31:45.820321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="322.63µs"
	I0731 23:31:45.834629       1 shared_informer.go:320] Caches are synced for deployment
	I0731 23:31:45.838147       1 shared_informer.go:320] Caches are synced for disruption
	I0731 23:31:45.852856       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 23:31:45.880189       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 23:31:45.913248       1 shared_informer.go:320] Caches are synced for namespace
	I0731 23:31:45.926022       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 23:31:45.934780       1 shared_informer.go:320] Caches are synced for service account
	I0731 23:31:46.388199       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:31:46.435217       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:31:46.435318       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e] <==
	I0731 23:31:24.643232       1 server_linux.go:69] "Using iptables proxy"
	E0731 23:31:24.658067       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-343154\": dial tcp 192.168.61.235:8443: connect: connection refused"
	
	
	==> kube-proxy [1831912aa22eccc61e9ae801f143c1be982f9fbfae43abc9dadaf76114146b2e] <==
	I0731 23:31:33.168664       1 server_linux.go:69] "Using iptables proxy"
	I0731 23:31:33.184751       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.235"]
	I0731 23:31:33.256254       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 23:31:33.256349       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 23:31:33.256368       1 server_linux.go:165] "Using iptables Proxier"
	I0731 23:31:33.261888       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 23:31:33.262099       1 server.go:872] "Version info" version="v1.30.3"
	I0731 23:31:33.262117       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:31:33.264367       1 config.go:192] "Starting service config controller"
	I0731 23:31:33.264382       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 23:31:33.264406       1 config.go:101] "Starting endpoint slice config controller"
	I0731 23:31:33.264411       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 23:31:33.270440       1 config.go:319] "Starting node config controller"
	I0731 23:31:33.270456       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 23:31:33.365100       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 23:31:33.365168       1 shared_informer.go:320] Caches are synced for service config
	I0731 23:31:33.370648       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1c50233c020c9d08dcb523b543dfe9dfc885fe0bd32cd9dbfdea347c8dc7f199] <==
	E0731 23:30:41.599580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 23:30:41.602283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 23:30:41.602581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 23:30:42.457290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 23:30:42.457417       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 23:30:42.493545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 23:30:42.493591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 23:30:42.626784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 23:30:42.627763       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 23:30:42.716852       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 23:30:42.716918       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 23:30:42.767394       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 23:30:42.767448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 23:30:42.804495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 23:30:42.804555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 23:30:42.880375       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 23:30:42.880435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 23:30:42.901587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 23:30:42.901659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 23:30:42.920915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 23:30:42.920961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 23:30:42.921025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 23:30:42.921050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0731 23:30:45.790340       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 23:31:15.531148       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cb247f23ee3bf1ed4c004429a2900223d82221ab545c16d9b4d169b56470e5ca] <==
	W0731 23:31:32.557200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 23:31:32.557234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 23:31:32.557308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 23:31:32.557338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 23:31:32.557405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 23:31:32.557434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 23:31:32.557505       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 23:31:32.557535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 23:31:32.557610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 23:31:32.557640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 23:31:32.561868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 23:31:32.561938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 23:31:32.562064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 23:31:32.562101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 23:31:32.562192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 23:31:32.562237       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 23:31:32.562358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 23:31:32.562404       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 23:31:32.562472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 23:31:32.562508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 23:31:32.562570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 23:31:32.562604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 23:31:32.562710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 23:31:32.562770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0731 23:31:32.659773       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 23:31:28 pause-343154 kubelet[2806]: E0731 23:31:28.855764    2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-343154?timeout=10s\": dial tcp 192.168.61.235:8443: connect: connection refused" interval="400ms"
	Jul 31 23:31:28 pause-343154 kubelet[2806]: I0731 23:31:28.954645    2806 kubelet_node_status.go:73] "Attempting to register node" node="pause-343154"
	Jul 31 23:31:28 pause-343154 kubelet[2806]: E0731 23:31:28.956663    2806 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.235:8443: connect: connection refused" node="pause-343154"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: E0731 23:31:29.027202    2806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.61.235:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-343154.17e7702572a7ba16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-343154,UID:pause-343154,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:pause-343154,},FirstTimestamp:2024-07-31 23:31:28.632199702 +0000 UTC m=+0.102585905,LastTimestamp:2024-07-31 23:31:28.632199702 +0000 UTC m=+0.102585905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-343154,}"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: I0731 23:31:29.116927    2806 scope.go:117] "RemoveContainer" containerID="89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: E0731 23:31:29.257286    2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-343154?timeout=10s\": dial tcp 192.168.61.235:8443: connect: connection refused" interval="800ms"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: I0731 23:31:29.359986    2806 kubelet_node_status.go:73] "Attempting to register node" node="pause-343154"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: E0731 23:31:29.361128    2806 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.235:8443: connect: connection refused" node="pause-343154"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: W0731 23:31:29.534292    2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.235:8443: connect: connection refused
	Jul 31 23:31:29 pause-343154 kubelet[2806]: E0731 23:31:29.534364    2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.235:8443: connect: connection refused
	Jul 31 23:31:29 pause-343154 kubelet[2806]: W0731 23:31:29.539449    2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.235:8443: connect: connection refused
	Jul 31 23:31:29 pause-343154 kubelet[2806]: E0731 23:31:29.539533    2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.235:8443: connect: connection refused
	Jul 31 23:31:30 pause-343154 kubelet[2806]: I0731 23:31:30.163530    2806 kubelet_node_status.go:73] "Attempting to register node" node="pause-343154"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.632329    2806 apiserver.go:52] "Watching apiserver"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.636791    2806 topology_manager.go:215] "Topology Admit Handler" podUID="247719d4-4db6-4e42-aa5b-ee65d12de302" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v29v8"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.637206    2806 topology_manager.go:215] "Topology Admit Handler" podUID="17405d1b-40da-4fdf-ae4e-0730d3737150" podNamespace="kube-system" podName="kube-proxy-262z4"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.649222    2806 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.704813    2806 kubelet_node_status.go:112] "Node was previously registered" node="pause-343154"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.705130    2806 kubelet_node_status.go:76] "Successfully registered node" node="pause-343154"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.705489    2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17405d1b-40da-4fdf-ae4e-0730d3737150-xtables-lock\") pod \"kube-proxy-262z4\" (UID: \"17405d1b-40da-4fdf-ae4e-0730d3737150\") " pod="kube-system/kube-proxy-262z4"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.705612    2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17405d1b-40da-4fdf-ae4e-0730d3737150-lib-modules\") pod \"kube-proxy-262z4\" (UID: \"17405d1b-40da-4fdf-ae4e-0730d3737150\") " pod="kube-system/kube-proxy-262z4"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.712053    2806 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.714413    2806 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.938370    2806 scope.go:117] "RemoveContainer" containerID="0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e"
	Jul 31 23:31:36 pause-343154 kubelet[2806]: I0731 23:31:36.106080    2806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-343154 -n pause-343154
helpers_test.go:261: (dbg) Run:  kubectl --context pause-343154 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-343154 -n pause-343154
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-343154 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-343154 logs -n 25: (1.674975084s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p test-preload-931367         | test-preload-931367       | jenkins | v1.33.1 | 31 Jul 24 23:26 UTC | 31 Jul 24 23:27 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| image   | test-preload-931367 image list | test-preload-931367       | jenkins | v1.33.1 | 31 Jul 24 23:27 UTC | 31 Jul 24 23:27 UTC |
	| delete  | -p test-preload-931367         | test-preload-931367       | jenkins | v1.33.1 | 31 Jul 24 23:27 UTC | 31 Jul 24 23:27 UTC |
	| start   | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:27 UTC | 31 Jul 24 23:28 UTC |
	|         | --memory=2048 --driver=kvm2    |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC | 31 Jul 24 23:28 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:28 UTC | 31 Jul 24 23:28 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-702146       | scheduled-stop-702146     | jenkins | v1.33.1 | 31 Jul 24 23:29 UTC | 31 Jul 24 23:29 UTC |
	| start   | -p pause-343154 --memory=2048  | pause-343154              | jenkins | v1.33.1 | 31 Jul 24 23:29 UTC | 31 Jul 24 23:31 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-329824         | offline-crio-329824       | jenkins | v1.33.1 | 31 Jul 24 23:29 UTC | 31 Jul 24 23:30 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-351764   | kubernetes-upgrade-351764 | jenkins | v1.33.1 | 31 Jul 24 23:29 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-692279      | minikube                  | jenkins | v1.26.0 | 31 Jul 24 23:29 UTC | 31 Jul 24 23:31 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p offline-crio-329824         | offline-crio-329824       | jenkins | v1.33.1 | 31 Jul 24 23:30 UTC | 31 Jul 24 23:30 UTC |
	| start   | -p running-upgrade-524949      | minikube                  | jenkins | v1.26.0 | 31 Jul 24 23:30 UTC | 31 Jul 24 23:31 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| start   | -p pause-343154                | pause-343154              | jenkins | v1.33.1 | 31 Jul 24 23:31 UTC | 31 Jul 24 23:31 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-692279 stop    | minikube                  | jenkins | v1.26.0 | 31 Jul 24 23:31 UTC | 31 Jul 24 23:31 UTC |
	| start   | -p stopped-upgrade-692279      | stopped-upgrade-692279    | jenkins | v1.33.1 | 31 Jul 24 23:31 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-524949      | running-upgrade-524949    | jenkins | v1.33.1 | 31 Jul 24 23:31 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 23:31:43
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 23:31:43.721369 1220421 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:31:43.721523 1220421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:31:43.721538 1220421 out.go:304] Setting ErrFile to fd 2...
	I0731 23:31:43.721545 1220421 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:31:43.721749 1220421 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 23:31:43.722426 1220421 out.go:298] Setting JSON to false
	I0731 23:31:43.723565 1220421 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":26055,"bootTime":1722442649,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 23:31:43.723645 1220421 start.go:139] virtualization: kvm guest
	I0731 23:31:43.726089 1220421 out.go:177] * [running-upgrade-524949] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 23:31:43.727503 1220421 notify.go:220] Checking for updates...
	I0731 23:31:43.727516 1220421 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 23:31:43.728887 1220421 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 23:31:43.730101 1220421 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:31:43.731299 1220421 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 23:31:43.732419 1220421 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 23:31:43.733624 1220421 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 23:31:43.735188 1220421 config.go:182] Loaded profile config "running-upgrade-524949": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0731 23:31:43.735823 1220421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:31:43.735900 1220421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:31:43.755408 1220421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
	I0731 23:31:43.755949 1220421 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:31:43.756763 1220421 main.go:141] libmachine: Using API Version  1
	I0731 23:31:43.756801 1220421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:31:43.757284 1220421 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:31:43.757547 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:43.759713 1220421 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0731 23:31:43.761062 1220421 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 23:31:43.761578 1220421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:31:43.761666 1220421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:31:43.782819 1220421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I0731 23:31:43.788588 1220421 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:31:43.789441 1220421 main.go:141] libmachine: Using API Version  1
	I0731 23:31:43.789473 1220421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:31:43.789910 1220421 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:31:43.790142 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:43.834044 1220421 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 23:31:43.835255 1220421 start.go:297] selected driver: kvm2
	I0731 23:31:43.835285 1220421 start.go:901] validating driver "kvm2" against &{Name:running-upgrade-524949 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-524
949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.53 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 23:31:43.835447 1220421 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 23:31:43.836510 1220421 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:31:43.836636 1220421 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 23:31:43.860685 1220421 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 23:31:43.861219 1220421 cni.go:84] Creating CNI manager for ""
	I0731 23:31:43.861237 1220421 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 23:31:43.861298 1220421 start.go:340] cluster config:
	{Name:running-upgrade-524949 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-524949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.53 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0731 23:31:43.861447 1220421 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 23:31:43.863117 1220421 out.go:177] * Starting "running-upgrade-524949" primary control-plane node in "running-upgrade-524949" cluster
	I0731 23:31:43.864235 1220421 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0731 23:31:43.864312 1220421 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0731 23:31:43.864327 1220421 cache.go:56] Caching tarball of preloaded images
	I0731 23:31:43.864510 1220421 preload.go:172] Found /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0731 23:31:43.864527 1220421 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0731 23:31:43.864657 1220421 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/running-upgrade-524949/config.json ...
	I0731 23:31:43.864985 1220421 start.go:360] acquireMachinesLock for running-upgrade-524949: {Name:mk8ea089372662c3f258dfd0dc43017a86788566 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0731 23:31:43.865073 1220421 start.go:364] duration metric: took 54.015µs to acquireMachinesLock for "running-upgrade-524949"
	I0731 23:31:43.865096 1220421 start.go:96] Skipping create...Using existing machine configuration
	I0731 23:31:43.865105 1220421 fix.go:54] fixHost starting: 
	I0731 23:31:43.865517 1220421 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:31:43.865574 1220421 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:31:43.884536 1220421 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37597
	I0731 23:31:43.885167 1220421 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:31:43.885716 1220421 main.go:141] libmachine: Using API Version  1
	I0731 23:31:43.885741 1220421 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:31:43.886153 1220421 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:31:43.886367 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:43.886587 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetState
	I0731 23:31:43.888918 1220421 fix.go:112] recreateIfNeeded on running-upgrade-524949: state=Running err=<nil>
	W0731 23:31:43.888980 1220421 fix.go:138] unexpected machine state, will restart: <nil>
	I0731 23:31:43.890627 1220421 out.go:177] * Updating the running kvm2 "running-upgrade-524949" VM ...
	I0731 23:31:44.640379 1219947 pod_ready.go:102] pod "etcd-pause-343154" in "kube-system" namespace has status "Ready":"False"
	I0731 23:31:46.637457 1219947 pod_ready.go:92] pod "etcd-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:46.637501 1219947 pod_ready.go:81] duration metric: took 10.007431516s for pod "etcd-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:46.637518 1219947 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.145449 1219947 pod_ready.go:92] pod "kube-apiserver-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:47.145492 1219947 pod_ready.go:81] duration metric: took 507.964768ms for pod "kube-apiserver-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.145510 1219947 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.152305 1219947 pod_ready.go:92] pod "kube-controller-manager-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:47.152335 1219947 pod_ready.go:81] duration metric: took 6.816979ms for pod "kube-controller-manager-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.152346 1219947 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-262z4" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.158008 1219947 pod_ready.go:92] pod "kube-proxy-262z4" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:47.158040 1219947 pod_ready.go:81] duration metric: took 5.687461ms for pod "kube-proxy-262z4" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.158049 1219947 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.163935 1219947 pod_ready.go:92] pod "kube-scheduler-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:47.163969 1219947 pod_ready.go:81] duration metric: took 5.911661ms for pod "kube-scheduler-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.163981 1219947 pod_ready.go:38] duration metric: took 12.547926117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:31:47.164002 1219947 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0731 23:31:47.178430 1219947 ops.go:34] apiserver oom_adj: -16
	I0731 23:31:47.178517 1219947 kubeadm.go:597] duration metric: took 22.534867043s to restartPrimaryControlPlane
	I0731 23:31:47.178534 1219947 kubeadm.go:394] duration metric: took 22.684195818s to StartCluster
	I0731 23:31:47.178560 1219947 settings.go:142] acquiring lock: {Name:mk076897bfd1af81579aafbccfd5a932e011b343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:31:47.178662 1219947 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 23:31:47.179530 1219947 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/kubeconfig: {Name:mk2865fa7a14d2aa7ec2bbf6e970de47767d4a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 23:31:47.179842 1219947 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.235 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0731 23:31:47.179968 1219947 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0731 23:31:47.180181 1219947 config.go:182] Loaded profile config "pause-343154": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:31:47.182250 1219947 out.go:177] * Verifying Kubernetes components...
	I0731 23:31:47.182250 1219947 out.go:177] * Enabled addons: 
	I0731 23:31:43.632662 1220182 main.go:141] libmachine: (stopped-upgrade-692279) Calling .GetIP
	I0731 23:31:43.636458 1220182 main.go:141] libmachine: (stopped-upgrade-692279) DBG | domain stopped-upgrade-692279 has defined MAC address 52:54:00:c6:f1:df in network mk-stopped-upgrade-692279
	I0731 23:31:43.637030 1220182 main.go:141] libmachine: (stopped-upgrade-692279) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f1:df", ip: ""} in network mk-stopped-upgrade-692279: {Iface:virbr4 ExpiryTime:2024-08-01 00:31:33 +0000 UTC Type:0 Mac:52:54:00:c6:f1:df Iaid: IPaddr:192.168.72.120 Prefix:24 Hostname:stopped-upgrade-692279 Clientid:01:52:54:00:c6:f1:df}
	I0731 23:31:43.637071 1220182 main.go:141] libmachine: (stopped-upgrade-692279) DBG | domain stopped-upgrade-692279 has defined IP address 192.168.72.120 and MAC address 52:54:00:c6:f1:df in network mk-stopped-upgrade-692279
	I0731 23:31:43.637518 1220182 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0731 23:31:43.642443 1220182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0731 23:31:43.657238 1220182 kubeadm.go:883] updating cluster {Name:stopped-upgrade-692279 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stop
ped-upgrade-692279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.120 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 23:31:43.657397 1220182 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0731 23:31:43.657463 1220182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:31:43.700506 1220182 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0731 23:31:43.700596 1220182 ssh_runner.go:195] Run: which lz4
	I0731 23:31:43.705266 1220182 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0731 23:31:43.710602 1220182 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 23:31:43.710641 1220182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I0731 23:31:45.203813 1220182 crio.go:462] duration metric: took 1.498584224s to copy over tarball
	I0731 23:31:45.203905 1220182 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0731 23:31:47.183744 1219947 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:31:47.183743 1219947 addons.go:510] duration metric: took 3.777279ms for enable addons: enabled=[]
	I0731 23:31:47.360014 1219947 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0731 23:31:47.375419 1219947 node_ready.go:35] waiting up to 6m0s for node "pause-343154" to be "Ready" ...
	I0731 23:31:47.379333 1219947 node_ready.go:49] node "pause-343154" has status "Ready":"True"
	I0731 23:31:47.379363 1219947 node_ready.go:38] duration metric: took 3.900991ms for node "pause-343154" to be "Ready" ...
	I0731 23:31:47.379376 1219947 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:31:47.436503 1219947 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-v29v8" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.834834 1219947 pod_ready.go:92] pod "coredns-7db6d8ff4d-v29v8" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:47.834870 1219947 pod_ready.go:81] duration metric: took 398.328225ms for pod "coredns-7db6d8ff4d-v29v8" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:47.834898 1219947 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:48.234490 1219947 pod_ready.go:92] pod "etcd-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:48.234527 1219947 pod_ready.go:81] duration metric: took 399.62001ms for pod "etcd-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:48.234543 1219947 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:43.891767 1220421 machine.go:94] provisionDockerMachine start ...
	I0731 23:31:43.891816 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:43.892197 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:43.895770 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:43.896553 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:43.896591 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:43.896805 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:43.897044 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:43.897243 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:43.897545 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:43.897793 1220421 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:43.898056 1220421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.53 22 <nil> <nil>}
	I0731 23:31:43.898079 1220421 main.go:141] libmachine: About to run SSH command:
	hostname
	I0731 23:31:44.028707 1220421 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-524949
	
	I0731 23:31:44.028751 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetMachineName
	I0731 23:31:44.029082 1220421 buildroot.go:166] provisioning hostname "running-upgrade-524949"
	I0731 23:31:44.029119 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetMachineName
	I0731 23:31:44.029358 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:44.033908 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.034291 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.034420 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.034850 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:44.035128 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.035349 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.035537 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:44.035744 1220421 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:44.035993 1220421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.53 22 <nil> <nil>}
	I0731 23:31:44.036010 1220421 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-524949 && echo "running-upgrade-524949" | sudo tee /etc/hostname
	I0731 23:31:44.195355 1220421 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-524949
	
	I0731 23:31:44.195392 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:44.199206 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.199600 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.199629 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.199994 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:44.200251 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.200426 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.200613 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:44.200836 1220421 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:44.201077 1220421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.53 22 <nil> <nil>}
	I0731 23:31:44.201102 1220421 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-524949' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-524949/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-524949' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0731 23:31:44.334345 1220421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0731 23:31:44.334382 1220421 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19312-1172186/.minikube CaCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19312-1172186/.minikube}
	I0731 23:31:44.334408 1220421 buildroot.go:174] setting up certificates
	I0731 23:31:44.334421 1220421 provision.go:84] configureAuth start
	I0731 23:31:44.334435 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetMachineName
	I0731 23:31:44.334733 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetIP
	I0731 23:31:44.338647 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.339417 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.339454 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.339701 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:44.342890 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.343450 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.343494 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.343595 1220421 provision.go:143] copyHostCerts
	I0731 23:31:44.343667 1220421 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem, removing ...
	I0731 23:31:44.343682 1220421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem
	I0731 23:31:44.343759 1220421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/ca.pem (1078 bytes)
	I0731 23:31:44.343893 1220421 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem, removing ...
	I0731 23:31:44.343906 1220421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem
	I0731 23:31:44.343937 1220421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/cert.pem (1123 bytes)
	I0731 23:31:44.344018 1220421 exec_runner.go:144] found /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem, removing ...
	I0731 23:31:44.344036 1220421 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem
	I0731 23:31:44.344075 1220421 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19312-1172186/.minikube/key.pem (1675 bytes)
	I0731 23:31:44.344184 1220421 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-524949 san=[127.0.0.1 192.168.83.53 localhost minikube running-upgrade-524949]
	I0731 23:31:44.696873 1220421 provision.go:177] copyRemoteCerts
	I0731 23:31:44.696966 1220421 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0731 23:31:44.697007 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:44.703467 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.703944 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.703985 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.704363 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:44.704635 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.704855 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:44.705077 1220421 sshutil.go:53] new ssh client: &{IP:192.168.83.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/running-upgrade-524949/id_rsa Username:docker}
	I0731 23:31:44.811938 1220421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0731 23:31:44.847572 1220421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0731 23:31:44.880161 1220421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0731 23:31:44.908491 1220421 provision.go:87] duration metric: took 574.05434ms to configureAuth
	I0731 23:31:44.908529 1220421 buildroot.go:189] setting minikube options for container-runtime
	I0731 23:31:44.908766 1220421 config.go:182] Loaded profile config "running-upgrade-524949": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0731 23:31:44.908878 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:44.912232 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.912674 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:44.912714 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:44.913081 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:44.913307 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.913497 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:44.913653 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:44.913883 1220421 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:44.914143 1220421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.53 22 <nil> <nil>}
	I0731 23:31:44.914169 1220421 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0731 23:31:45.485014 1220421 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0731 23:31:45.485049 1220421 machine.go:97] duration metric: took 1.593251473s to provisionDockerMachine
	I0731 23:31:45.485062 1220421 start.go:293] postStartSetup for "running-upgrade-524949" (driver="kvm2")
	I0731 23:31:45.485073 1220421 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0731 23:31:45.485096 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:45.485493 1220421 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0731 23:31:45.485534 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:45.488479 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.488921 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:45.488961 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.489139 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:45.489385 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:45.489548 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:45.489686 1220421 sshutil.go:53] new ssh client: &{IP:192.168.83.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/running-upgrade-524949/id_rsa Username:docker}
	I0731 23:31:45.581494 1220421 ssh_runner.go:195] Run: cat /etc/os-release
	I0731 23:31:45.585885 1220421 info.go:137] Remote host: Buildroot 2021.02.12
	I0731 23:31:45.585924 1220421 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/addons for local assets ...
	I0731 23:31:45.586008 1220421 filesync.go:126] Scanning /home/jenkins/minikube-integration/19312-1172186/.minikube/files for local assets ...
	I0731 23:31:45.586116 1220421 filesync.go:149] local asset: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem -> 11794002.pem in /etc/ssl/certs
	I0731 23:31:45.586239 1220421 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0731 23:31:45.596178 1220421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/ssl/certs/11794002.pem --> /etc/ssl/certs/11794002.pem (1708 bytes)
	I0731 23:31:45.633418 1220421 start.go:296] duration metric: took 148.336539ms for postStartSetup
	I0731 23:31:45.633465 1220421 fix.go:56] duration metric: took 1.768361154s for fixHost
	I0731 23:31:45.633505 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:45.636928 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.637387 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:45.637422 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.637561 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:45.637810 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:45.638043 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:45.638214 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:45.638394 1220421 main.go:141] libmachine: Using SSH client type: native
	I0731 23:31:45.638624 1220421 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.83.53 22 <nil> <nil>}
	I0731 23:31:45.638638 1220421 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0731 23:31:45.761904 1220421 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722468705.753875573
	
	I0731 23:31:45.761938 1220421 fix.go:216] guest clock: 1722468705.753875573
	I0731 23:31:45.761962 1220421 fix.go:229] Guest: 2024-07-31 23:31:45.753875573 +0000 UTC Remote: 2024-07-31 23:31:45.633475999 +0000 UTC m=+1.962900402 (delta=120.399574ms)
	I0731 23:31:45.762013 1220421 fix.go:200] guest clock delta is within tolerance: 120.399574ms
	I0731 23:31:45.762020 1220421 start.go:83] releasing machines lock for "running-upgrade-524949", held for 1.896934813s
	I0731 23:31:45.762048 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:45.762380 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetIP
	I0731 23:31:45.766279 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.766855 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:45.766897 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.767263 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:45.767930 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:45.768198 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .DriverName
	I0731 23:31:45.768292 1220421 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0731 23:31:45.768352 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:45.768428 1220421 ssh_runner.go:195] Run: cat /version.json
	I0731 23:31:45.768467 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHHostname
	I0731 23:31:45.771671 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.772083 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:45.772139 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.772446 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:45.772648 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:45.772790 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:45.772837 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.772948 1220421 sshutil.go:53] new ssh client: &{IP:192.168.83.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/running-upgrade-524949/id_rsa Username:docker}
	I0731 23:31:45.773340 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:45.773362 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:45.773555 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHPort
	I0731 23:31:45.773808 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHKeyPath
	I0731 23:31:45.773998 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetSSHUsername
	I0731 23:31:45.774167 1220421 sshutil.go:53] new ssh client: &{IP:192.168.83.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/running-upgrade-524949/id_rsa Username:docker}
	W0731 23:31:45.879906 1220421 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0731 23:31:45.880007 1220421 ssh_runner.go:195] Run: systemctl --version
	I0731 23:31:45.886274 1220421 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0731 23:31:46.029923 1220421 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0731 23:31:46.037547 1220421 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0731 23:31:46.037642 1220421 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0731 23:31:46.057973 1220421 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0731 23:31:46.058011 1220421 start.go:495] detecting cgroup driver to use...
	I0731 23:31:46.058097 1220421 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0731 23:31:46.074491 1220421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0731 23:31:46.091435 1220421 docker.go:217] disabling cri-docker service (if available) ...
	I0731 23:31:46.091511 1220421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0731 23:31:46.104217 1220421 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0731 23:31:46.119834 1220421 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0731 23:31:46.266446 1220421 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0731 23:31:46.414373 1220421 docker.go:233] disabling docker service ...
	I0731 23:31:46.414476 1220421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0731 23:31:46.433096 1220421 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0731 23:31:46.450724 1220421 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0731 23:31:46.621499 1220421 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0731 23:31:46.795328 1220421 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0731 23:31:46.808825 1220421 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0731 23:31:46.831014 1220421 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0731 23:31:46.831113 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.842286 1220421 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0731 23:31:46.842369 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.852698 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.864400 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.876292 1220421 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0731 23:31:46.890064 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.899827 1220421 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.917714 1220421 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0731 23:31:46.929234 1220421 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0731 23:31:46.940559 1220421 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0731 23:31:46.951679 1220421 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0731 23:31:47.158479 1220421 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0731 23:31:46.759805 1218502 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0731 23:31:46.760115 1218502 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0731 23:31:46.760127 1218502 kubeadm.go:310] 
	I0731 23:31:46.760177 1218502 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0731 23:31:46.760229 1218502 kubeadm.go:310] 		timed out waiting for the condition
	I0731 23:31:46.760236 1218502 kubeadm.go:310] 
	I0731 23:31:46.760276 1218502 kubeadm.go:310] 	This error is likely caused by:
	I0731 23:31:46.760321 1218502 kubeadm.go:310] 		- The kubelet is not running
	I0731 23:31:46.760447 1218502 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0731 23:31:46.760455 1218502 kubeadm.go:310] 
	I0731 23:31:46.760550 1218502 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0731 23:31:46.760577 1218502 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0731 23:31:46.760603 1218502 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0731 23:31:46.760607 1218502 kubeadm.go:310] 
	I0731 23:31:46.760697 1218502 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0731 23:31:46.760767 1218502 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0731 23:31:46.760772 1218502 kubeadm.go:310] 
	I0731 23:31:46.760869 1218502 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0731 23:31:46.760968 1218502 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0731 23:31:46.761047 1218502 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0731 23:31:46.761106 1218502 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0731 23:31:46.761109 1218502 kubeadm.go:310] 
	I0731 23:31:46.761955 1218502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:31:46.762078 1218502 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0731 23:31:46.762159 1218502 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0731 23:31:46.762332 1218502 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-351764 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-351764 localhost] and IPs [192.168.39.228 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0731 23:31:46.762394 1218502 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0731 23:31:47.361436 1218502 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:31:47.377093 1218502 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0731 23:31:47.391110 1218502 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0731 23:31:47.391138 1218502 kubeadm.go:157] found existing configuration files:
	
	I0731 23:31:47.391204 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0731 23:31:47.403341 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0731 23:31:47.403456 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0731 23:31:47.416875 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0731 23:31:47.429337 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0731 23:31:47.429423 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0731 23:31:47.442697 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0731 23:31:47.455492 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0731 23:31:47.455588 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0731 23:31:47.468044 1218502 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0731 23:31:47.480083 1218502 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0731 23:31:47.480199 1218502 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0731 23:31:47.495474 1218502 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0731 23:31:47.745355 1218502 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0731 23:31:48.634588 1219947 pod_ready.go:92] pod "kube-apiserver-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:48.634615 1219947 pod_ready.go:81] duration metric: took 400.061905ms for pod "kube-apiserver-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:48.634630 1219947 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.035008 1219947 pod_ready.go:92] pod "kube-controller-manager-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:49.035046 1219947 pod_ready.go:81] duration metric: took 400.406919ms for pod "kube-controller-manager-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.035061 1219947 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-262z4" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.709403 1219947 pod_ready.go:92] pod "kube-proxy-262z4" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:49.709429 1219947 pod_ready.go:81] duration metric: took 674.360831ms for pod "kube-proxy-262z4" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.709441 1219947 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.833896 1219947 pod_ready.go:92] pod "kube-scheduler-pause-343154" in "kube-system" namespace has status "Ready":"True"
	I0731 23:31:49.833923 1219947 pod_ready.go:81] duration metric: took 124.474838ms for pod "kube-scheduler-pause-343154" in "kube-system" namespace to be "Ready" ...
	I0731 23:31:49.833932 1219947 pod_ready.go:38] duration metric: took 2.454544512s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0731 23:31:49.833962 1219947 api_server.go:52] waiting for apiserver process to appear ...
	I0731 23:31:49.834036 1219947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:31:49.848669 1219947 api_server.go:72] duration metric: took 2.668777827s to wait for apiserver process to appear ...
	I0731 23:31:49.848708 1219947 api_server.go:88] waiting for apiserver healthz status ...
	I0731 23:31:49.848738 1219947 api_server.go:253] Checking apiserver healthz at https://192.168.61.235:8443/healthz ...
	I0731 23:31:49.854092 1219947 api_server.go:279] https://192.168.61.235:8443/healthz returned 200:
	ok
	I0731 23:31:49.855097 1219947 api_server.go:141] control plane version: v1.30.3
	I0731 23:31:49.855126 1219947 api_server.go:131] duration metric: took 6.407554ms to wait for apiserver health ...
	I0731 23:31:49.855136 1219947 system_pods.go:43] waiting for kube-system pods to appear ...
	I0731 23:31:50.036309 1219947 system_pods.go:59] 6 kube-system pods found
	I0731 23:31:50.036343 1219947 system_pods.go:61] "coredns-7db6d8ff4d-v29v8" [247719d4-4db6-4e42-aa5b-ee65d12de302] Running
	I0731 23:31:50.036348 1219947 system_pods.go:61] "etcd-pause-343154" [d7a2576d-d2f7-4cd2-96f2-1f78a32a859c] Running
	I0731 23:31:50.036352 1219947 system_pods.go:61] "kube-apiserver-pause-343154" [971e0137-ea8a-4b9d-83b3-14ce1308ee93] Running
	I0731 23:31:50.036355 1219947 system_pods.go:61] "kube-controller-manager-pause-343154" [19ba4fb3-bb6e-43f6-a1cd-7d8aef4daae1] Running
	I0731 23:31:50.036358 1219947 system_pods.go:61] "kube-proxy-262z4" [17405d1b-40da-4fdf-ae4e-0730d3737150] Running
	I0731 23:31:50.036361 1219947 system_pods.go:61] "kube-scheduler-pause-343154" [b8e8b790-47e3-450b-ad71-bd1d3c9001d2] Running
	I0731 23:31:50.036366 1219947 system_pods.go:74] duration metric: took 181.224658ms to wait for pod list to return data ...
	I0731 23:31:50.036374 1219947 default_sa.go:34] waiting for default service account to be created ...
	I0731 23:31:50.233918 1219947 default_sa.go:45] found service account: "default"
	I0731 23:31:50.233949 1219947 default_sa.go:55] duration metric: took 197.568227ms for default service account to be created ...
	I0731 23:31:50.233959 1219947 system_pods.go:116] waiting for k8s-apps to be running ...
	I0731 23:31:50.435696 1219947 system_pods.go:86] 6 kube-system pods found
	I0731 23:31:50.435731 1219947 system_pods.go:89] "coredns-7db6d8ff4d-v29v8" [247719d4-4db6-4e42-aa5b-ee65d12de302] Running
	I0731 23:31:50.435736 1219947 system_pods.go:89] "etcd-pause-343154" [d7a2576d-d2f7-4cd2-96f2-1f78a32a859c] Running
	I0731 23:31:50.435740 1219947 system_pods.go:89] "kube-apiserver-pause-343154" [971e0137-ea8a-4b9d-83b3-14ce1308ee93] Running
	I0731 23:31:50.435745 1219947 system_pods.go:89] "kube-controller-manager-pause-343154" [19ba4fb3-bb6e-43f6-a1cd-7d8aef4daae1] Running
	I0731 23:31:50.435748 1219947 system_pods.go:89] "kube-proxy-262z4" [17405d1b-40da-4fdf-ae4e-0730d3737150] Running
	I0731 23:31:50.435752 1219947 system_pods.go:89] "kube-scheduler-pause-343154" [b8e8b790-47e3-450b-ad71-bd1d3c9001d2] Running
	I0731 23:31:50.435759 1219947 system_pods.go:126] duration metric: took 201.794648ms to wait for k8s-apps to be running ...
	I0731 23:31:50.435771 1219947 system_svc.go:44] waiting for kubelet service to be running ....
	I0731 23:31:50.435822 1219947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:31:50.450675 1219947 system_svc.go:56] duration metric: took 14.888301ms WaitForService to wait for kubelet
	I0731 23:31:50.450714 1219947 kubeadm.go:582] duration metric: took 3.270832431s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0731 23:31:50.450734 1219947 node_conditions.go:102] verifying NodePressure condition ...
	I0731 23:31:50.634591 1219947 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0731 23:31:50.634628 1219947 node_conditions.go:123] node cpu capacity is 2
	I0731 23:31:50.634640 1219947 node_conditions.go:105] duration metric: took 183.901344ms to run NodePressure ...
	I0731 23:31:50.634651 1219947 start.go:241] waiting for startup goroutines ...
	I0731 23:31:50.634658 1219947 start.go:246] waiting for cluster config update ...
	I0731 23:31:50.634665 1219947 start.go:255] writing updated cluster config ...
	I0731 23:31:50.743002 1219947 ssh_runner.go:195] Run: rm -f paused
	I0731 23:31:50.796583 1219947 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0731 23:31:51.066459 1219947 out.go:177] * Done! kubectl is now configured to use "pause-343154" cluster and "default" namespace by default
	I0731 23:31:52.188287 1220421 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.029764513s)
	I0731 23:31:52.188328 1220421 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0731 23:31:52.188391 1220421 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0731 23:31:52.193276 1220421 start.go:563] Will wait 60s for crictl version
	I0731 23:31:52.193356 1220421 ssh_runner.go:195] Run: which crictl
	I0731 23:31:52.197353 1220421 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0731 23:31:52.235130 1220421 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0731 23:31:52.235222 1220421 ssh_runner.go:195] Run: crio --version
	I0731 23:31:52.285320 1220421 ssh_runner.go:195] Run: crio --version
	I0731 23:31:52.341021 1220421 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
	I0731 23:31:48.390180 1220182 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.186240092s)
	I0731 23:31:48.390232 1220182 crio.go:469] duration metric: took 3.186376298s to extract the tarball
	I0731 23:31:48.390241 1220182 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0731 23:31:48.432661 1220182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:31:48.468122 1220182 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0731 23:31:48.468159 1220182 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0731 23:31:48.468240 1220182 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:31:48.468247 1220182 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 23:31:48.468267 1220182 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0731 23:31:48.468295 1220182 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 23:31:48.468298 1220182 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:31:48.468320 1220182 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0731 23:31:48.468357 1220182 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 23:31:48.468370 1220182 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 23:31:48.469799 1220182 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 23:31:48.470326 1220182 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 23:31:48.470381 1220182 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 23:31:48.470381 1220182 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:31:48.470327 1220182 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 23:31:48.470327 1220182 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0731 23:31:48.470324 1220182 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0731 23:31:48.470331 1220182 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:31:48.625341 1220182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0731 23:31:48.628277 1220182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0731 23:31:48.634553 1220182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0731 23:31:48.638308 1220182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:31:48.646690 1220182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 23:31:48.650505 1220182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0731 23:31:48.680048 1220182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0731 23:31:48.700483 1220182 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0731 23:31:48.700540 1220182 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0731 23:31:48.700597 1220182 ssh_runner.go:195] Run: which crictl
	I0731 23:31:48.766533 1220182 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0731 23:31:48.766646 1220182 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0731 23:31:48.766720 1220182 ssh_runner.go:195] Run: which crictl
	I0731 23:31:48.792074 1220182 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0731 23:31:48.792148 1220182 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0731 23:31:48.792204 1220182 ssh_runner.go:195] Run: which crictl
	I0731 23:31:48.801522 1220182 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0731 23:31:48.801628 1220182 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0731 23:31:48.801669 1220182 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0731 23:31:48.801731 1220182 ssh_runner.go:195] Run: which crictl
	I0731 23:31:48.801636 1220182 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:31:48.801811 1220182 ssh_runner.go:195] Run: which crictl
	I0731 23:31:48.801530 1220182 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0731 23:31:48.801891 1220182 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 23:31:48.801922 1220182 ssh_runner.go:195] Run: which crictl
	I0731 23:31:48.816033 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0731 23:31:48.816046 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0731 23:31:48.816073 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 23:31:48.816052 1220182 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0731 23:31:48.816141 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 23:31:48.816151 1220182 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0731 23:31:48.816178 1220182 ssh_runner.go:195] Run: which crictl
	I0731 23:31:48.816205 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 23:31:48.816147 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:31:48.919025 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0731 23:31:48.919108 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0731 23:31:48.924968 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 23:31:48.925015 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 23:31:48.924978 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 23:31:48.925155 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 23:31:48.925196 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:31:49.021900 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0731 23:31:49.021955 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0731 23:31:49.037264 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0731 23:31:49.037314 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0731 23:31:49.037317 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 23:31:49.037349 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0731 23:31:49.037391 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0731 23:31:49.086314 1220182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0731 23:31:49.100481 1220182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0731 23:31:49.100639 1220182 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0731 23:31:49.114421 1220182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0731 23:31:49.114551 1220182 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0731 23:31:49.157154 1220182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0731 23:31:49.157255 1220182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1
	I0731 23:31:49.157281 1220182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.1
	I0731 23:31:49.157312 1220182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0731 23:31:49.157282 1220182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0731 23:31:49.157473 1220182 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0731 23:31:49.284155 1220182 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0731 23:31:49.284200 1220182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0731 23:31:49.284212 1220182 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0731 23:31:49.284247 1220182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0731 23:31:49.284316 1220182 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0731 23:31:49.284346 1220182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0731 23:31:49.284386 1220182 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1
	I0731 23:31:49.334240 1220182 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0731 23:31:49.334322 1220182 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0731 23:31:51.694937 1220182 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.360589777s)
	I0731 23:31:51.694968 1220182 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0731 23:31:51.694992 1220182 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0731 23:31:51.695042 1220182 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0731 23:31:52.145774 1220182 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0731 23:31:52.145836 1220182 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0731 23:31:52.145906 1220182 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
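
The sequence above is minikube's image-cache fallback for the running-upgrade cluster: each required image is probed on the node, removed with crictl when its stored digest does not match, copied over from the host cache, and loaded with podman. The following is a minimal Go sketch of the probe-then-load step only, assuming a local shell and an illustrative tarball path; minikube itself runs the same commands remotely through its ssh_runner, so this is not the actual implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage checks that the cached image tarball exists (analogous to the
// `stat -c "%s %y"` probe in the log) and then loads it with `podman load -i`,
// as in `sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0`.
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		// In minikube this is where the tarball would be scp'd from the host cache.
		return fmt.Errorf("image tarball not present, would need transfer: %w", err)
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	fmt.Printf("loaded %s: %s", tarball, out)
	return nil
}

func main() {
	// Hypothetical path matching the layout seen in the log.
	if err := loadCachedImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
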
	I0731 23:31:52.342183 1220421 main.go:141] libmachine: (running-upgrade-524949) Calling .GetIP
	I0731 23:31:52.345716 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:52.346105 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:c1:e7", ip: ""} in network mk-running-upgrade-524949: {Iface:virbr2 ExpiryTime:2024-08-01 00:31:06 +0000 UTC Type:0 Mac:52:54:00:3f:c1:e7 Iaid: IPaddr:192.168.83.53 Prefix:24 Hostname:running-upgrade-524949 Clientid:01:52:54:00:3f:c1:e7}
	I0731 23:31:52.346135 1220421 main.go:141] libmachine: (running-upgrade-524949) DBG | domain running-upgrade-524949 has defined IP address 192.168.83.53 and MAC address 52:54:00:3f:c1:e7 in network mk-running-upgrade-524949
	I0731 23:31:52.346389 1220421 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0731 23:31:52.351348 1220421 kubeadm.go:883] updating cluster {Name:running-upgrade-524949 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:runn
ing-upgrade-524949 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.53 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0731 23:31:52.351488 1220421 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0731 23:31:52.351558 1220421 ssh_runner.go:195] Run: sudo crictl images --output json
	I0731 23:31:52.393251 1220421 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0731 23:31:52.393324 1220421 ssh_runner.go:195] Run: which lz4
	I0731 23:31:52.397840 1220421 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0731 23:31:52.402369 1220421 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0731 23:31:52.402416 1220421 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
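
Before falling back to the preload tarball, minikube asks CRI-O for its image list (the `sudo crictl images --output json` call and the crio.go:510 message above) and only scp's the ~497 MB archive when the expected images are absent. Below is a rough Go sketch of that check; the JSON field layout (an "images" array with "repoTags") is a simplifying assumption about crictl's output, not the full CRI ListImagesResponse.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList loosely mirrors the JSON printed by `crictl images --output json`;
// treat the field layout as an assumption for this sketch.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already knows the given image tag.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.24.1")
	if err != nil {
		fmt.Println("crictl query failed:", err)
		return
	}
	if !ok {
		fmt.Println("preloaded images missing; minikube would transfer the preload tarball instead")
	}
}
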
	
	
	==> CRI-O <==
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.388295954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468714388257317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab4d0201-5d3b-4eb6-94aa-de49bf7e8c3b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.389407771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2b81a14-aa0c-4fa6-acbc-d27255b3d619 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.389522710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2b81a14-aa0c-4fa6-acbc-d27255b3d619 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.390237033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0eee0a568fc10e48d174b937e159e333ea46f5fa9499771854666273d45c8fd0,PodSandboxId:a0142d52dda7a52f28b3b4a15d326411f3e66081fb6aa365bbc620ff920cdff5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468693427281750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1831912aa22eccc61e9ae801f143c1be982f9fbfae43abc9dadaf76114146b2e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722468692957265578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0e402e1f5c20bc523700d27f2c0e6bb8423cf66f3985fa8ef5337887f0ad78,PodSandboxId:922012a74e0e469be6b440e93f1b5bc6861d2f39e3f7b9fff50cc993555136c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722468689326106235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eea09028
7bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5828bbebd37055344533d35a31c3d3e9f00f08746995ec6598ce8159deb3842,PodSandboxId:5b1321b988fef8996007f3c02f26703be1d4f6fd522f1f5f60cf942c6da61d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722468689316370141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfb2d384a50d4a6886f9f1aa7aa7fe2d48e44b81622384608c3938419560305,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722468689135131659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb247f23ee3bf1ed4c004429a2900223d82221ab545c16d9b4d169b56470e5ca,PodSandboxId:e044eee34222149673bdb9d26cb1482475b902f154ff590b1c610c3bcfeb7245,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722468686586390910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io
.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722468684367647368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00
c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722468684125665095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9d4617012ae22820ae12d0b4b652cba31daacd8174ca8f4df2a02ba020f18e,PodSandboxId:cb0df8747f7480aaa01a66521ff380e797afa73fe78a7bc00a514e6a5c56785c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468659543172576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c50233c020c9d08dcb523b543dfe9dfc885fe0bd32cd9dbfdea347c8dc7f199,PodSandboxId:274cdf30769595f71bdc44aa47b073fc63cb9d45868a0917d657972a84aac21d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722468639092835515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240c60cf8d397f37e67c3e43694db8126659e80dddf59faf36bec8b549860cfc,PodSandboxId:78688a7446259928a5bcfc2a3e880f5b288983897364aa265282faf354a2554b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722468639080729749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 7a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a774d18a7254f6d800d5381f4e49913ea1b5b7ad036aead2b86dd286929c1f54,PodSandboxId:66735aa1c767011e136a5b6ad5745bd118a77d0aa50f4648fcefca990fef0378,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722468638908114818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29eea090287bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2b81a14-aa0c-4fa6-acbc-d27255b3d619 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.449118149Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a6c92cee-e762-4cfc-8280-3ccfb061e71b name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.449244666Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6c92cee-e762-4cfc-8280-3ccfb061e71b name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.451164816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1136c7a1-085b-44f1-b7bc-53d9bc0acf09 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.451846959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468714451798468,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1136c7a1-085b-44f1-b7bc-53d9bc0acf09 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.452824625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=786e87ff-d78e-495e-ac09-db8567608e79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.452911254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=786e87ff-d78e-495e-ac09-db8567608e79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.453480566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0eee0a568fc10e48d174b937e159e333ea46f5fa9499771854666273d45c8fd0,PodSandboxId:a0142d52dda7a52f28b3b4a15d326411f3e66081fb6aa365bbc620ff920cdff5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468693427281750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1831912aa22eccc61e9ae801f143c1be982f9fbfae43abc9dadaf76114146b2e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722468692957265578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0e402e1f5c20bc523700d27f2c0e6bb8423cf66f3985fa8ef5337887f0ad78,PodSandboxId:922012a74e0e469be6b440e93f1b5bc6861d2f39e3f7b9fff50cc993555136c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722468689326106235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eea09028
7bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5828bbebd37055344533d35a31c3d3e9f00f08746995ec6598ce8159deb3842,PodSandboxId:5b1321b988fef8996007f3c02f26703be1d4f6fd522f1f5f60cf942c6da61d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722468689316370141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfb2d384a50d4a6886f9f1aa7aa7fe2d48e44b81622384608c3938419560305,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722468689135131659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb247f23ee3bf1ed4c004429a2900223d82221ab545c16d9b4d169b56470e5ca,PodSandboxId:e044eee34222149673bdb9d26cb1482475b902f154ff590b1c610c3bcfeb7245,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722468686586390910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io
.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722468684367647368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00
c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722468684125665095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9d4617012ae22820ae12d0b4b652cba31daacd8174ca8f4df2a02ba020f18e,PodSandboxId:cb0df8747f7480aaa01a66521ff380e797afa73fe78a7bc00a514e6a5c56785c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468659543172576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c50233c020c9d08dcb523b543dfe9dfc885fe0bd32cd9dbfdea347c8dc7f199,PodSandboxId:274cdf30769595f71bdc44aa47b073fc63cb9d45868a0917d657972a84aac21d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722468639092835515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240c60cf8d397f37e67c3e43694db8126659e80dddf59faf36bec8b549860cfc,PodSandboxId:78688a7446259928a5bcfc2a3e880f5b288983897364aa265282faf354a2554b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722468639080729749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 7a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a774d18a7254f6d800d5381f4e49913ea1b5b7ad036aead2b86dd286929c1f54,PodSandboxId:66735aa1c767011e136a5b6ad5745bd118a77d0aa50f4648fcefca990fef0378,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722468638908114818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29eea090287bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=786e87ff-d78e-495e-ac09-db8567608e79 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.512481280Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ae8db79-4d4e-479d-a926-67a907e39e43 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.512560617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ae8db79-4d4e-479d-a926-67a907e39e43 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.513934837Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b8ee381-088f-477e-bd33-dac731ae1be7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.514487677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468714514461581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b8ee381-088f-477e-bd33-dac731ae1be7 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.515163034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4619c988-ed23-49f2-9807-d7d4652a1cc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.515221489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4619c988-ed23-49f2-9807-d7d4652a1cc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.515855385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0eee0a568fc10e48d174b937e159e333ea46f5fa9499771854666273d45c8fd0,PodSandboxId:a0142d52dda7a52f28b3b4a15d326411f3e66081fb6aa365bbc620ff920cdff5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468693427281750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1831912aa22eccc61e9ae801f143c1be982f9fbfae43abc9dadaf76114146b2e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722468692957265578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0e402e1f5c20bc523700d27f2c0e6bb8423cf66f3985fa8ef5337887f0ad78,PodSandboxId:922012a74e0e469be6b440e93f1b5bc6861d2f39e3f7b9fff50cc993555136c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722468689326106235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eea09028
7bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5828bbebd37055344533d35a31c3d3e9f00f08746995ec6598ce8159deb3842,PodSandboxId:5b1321b988fef8996007f3c02f26703be1d4f6fd522f1f5f60cf942c6da61d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722468689316370141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfb2d384a50d4a6886f9f1aa7aa7fe2d48e44b81622384608c3938419560305,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722468689135131659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb247f23ee3bf1ed4c004429a2900223d82221ab545c16d9b4d169b56470e5ca,PodSandboxId:e044eee34222149673bdb9d26cb1482475b902f154ff590b1c610c3bcfeb7245,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722468686586390910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io
.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722468684367647368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00
c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722468684125665095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9d4617012ae22820ae12d0b4b652cba31daacd8174ca8f4df2a02ba020f18e,PodSandboxId:cb0df8747f7480aaa01a66521ff380e797afa73fe78a7bc00a514e6a5c56785c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468659543172576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c50233c020c9d08dcb523b543dfe9dfc885fe0bd32cd9dbfdea347c8dc7f199,PodSandboxId:274cdf30769595f71bdc44aa47b073fc63cb9d45868a0917d657972a84aac21d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722468639092835515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240c60cf8d397f37e67c3e43694db8126659e80dddf59faf36bec8b549860cfc,PodSandboxId:78688a7446259928a5bcfc2a3e880f5b288983897364aa265282faf354a2554b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722468639080729749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 7a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a774d18a7254f6d800d5381f4e49913ea1b5b7ad036aead2b86dd286929c1f54,PodSandboxId:66735aa1c767011e136a5b6ad5745bd118a77d0aa50f4648fcefca990fef0378,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722468638908114818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29eea090287bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4619c988-ed23-49f2-9807-d7d4652a1cc3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.575173085Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31b86577-ce89-493b-8f86-519c8dd7ef27 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.575285726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31b86577-ce89-493b-8f86-519c8dd7ef27 name=/runtime.v1.RuntimeService/Version
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.577851624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f868881e-eb9e-4e47-8bf4-1d5089317d5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.578369159Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722468714578335061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f868881e-eb9e-4e47-8bf4-1d5089317d5f name=/runtime.v1.ImageService/ImageFsInfo
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.579514583Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f213d515-03e8-4fc8-844d-f393c126ee6e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.579600503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f213d515-03e8-4fc8-844d-f393c126ee6e name=/runtime.v1.RuntimeService/ListContainers
	Jul 31 23:31:54 pause-343154 crio[2272]: time="2024-07-31 23:31:54.579998913Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0eee0a568fc10e48d174b937e159e333ea46f5fa9499771854666273d45c8fd0,PodSandboxId:a0142d52dda7a52f28b3b4a15d326411f3e66081fb6aa365bbc620ff920cdff5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722468693427281750,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol
\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1831912aa22eccc61e9ae801f143c1be982f9fbfae43abc9dadaf76114146b2e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722468692957265578,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.u
id: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d0e402e1f5c20bc523700d27f2c0e6bb8423cf66f3985fa8ef5337887f0ad78,PodSandboxId:922012a74e0e469be6b440e93f1b5bc6861d2f39e3f7b9fff50cc993555136c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722468689326106235,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29eea09028
7bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5828bbebd37055344533d35a31c3d3e9f00f08746995ec6598ce8159deb3842,PodSandboxId:5b1321b988fef8996007f3c02f26703be1d4f6fd522f1f5f60cf942c6da61d1b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722468689316370141,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7
a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bfb2d384a50d4a6886f9f1aa7aa7fe2d48e44b81622384608c3938419560305,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722468689135131659,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb247f23ee3bf1ed4c004429a2900223d82221ab545c16d9b4d169b56470e5ca,PodSandboxId:e044eee34222149673bdb9d26cb1482475b902f154ff590b1c610c3bcfeb7245,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722468686586390910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io
.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e,PodSandboxId:24dcf66a7c4ee23cfb81ac4c46787816e899565b8384013c3e5330bc421fc376,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722468684367647368,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-262z4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17405d1b-40da-4fdf-ae4e-0730d3737150,},Annotations:map[string]string{io.kubernetes.container.hash: d5fb00
c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254,PodSandboxId:3f46b064e03250ccd9dbeeb07e1e70dd2480de1c359c7398bc0c410fc4bca980,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722468684125665095,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 175b71feebc2b5ce60321da73e189a6f,},Annotations:map[string]string{io.kubernetes.container.hash: 45b062aa,io.kubernetes.container.restartCount: 1,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c9d4617012ae22820ae12d0b4b652cba31daacd8174ca8f4df2a02ba020f18e,PodSandboxId:cb0df8747f7480aaa01a66521ff380e797afa73fe78a7bc00a514e6a5c56785c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722468659543172576,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v29v8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 247719d4-4db6-4e42-aa5b-ee65d12de302,},Annotations:map[string]string{io.kubernetes.container.hash: d5d04ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c50233c020c9d08dcb523b543dfe9dfc885fe0bd32cd9dbfdea347c8dc7f199,PodSandboxId:274cdf30769595f71bdc44aa47b073fc63cb9d45868a0917d657972a84aac21d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722468639092835515,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-343154,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 3335ad788f6a94305addfdb6e616a236,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:240c60cf8d397f37e67c3e43694db8126659e80dddf59faf36bec8b549860cfc,PodSandboxId:78688a7446259928a5bcfc2a3e880f5b288983897364aa265282faf354a2554b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722468639080729749,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-343154,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 7a40d95a87152af5b419f6d1733fe70d,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a774d18a7254f6d800d5381f4e49913ea1b5b7ad036aead2b86dd286929c1f54,PodSandboxId:66735aa1c767011e136a5b6ad5745bd118a77d0aa50f4648fcefca990fef0378,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722468638908114818,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-343154,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 29eea090287bc0450b3fa070884ecb2d,},Annotations:map[string]string{io.kubernetes.container.hash: 677ec4c3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f213d515-03e8-4fc8-844d-f393c126ee6e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0eee0a568fc10       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   21 seconds ago       Running             coredns                   1                   a0142d52dda7a       coredns-7db6d8ff4d-v29v8
	1831912aa22ec       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   21 seconds ago       Running             kube-proxy                2                   24dcf66a7c4ee       kube-proxy-262z4
	0d0e402e1f5c2       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   25 seconds ago       Running             kube-apiserver            1                   922012a74e0e4       kube-apiserver-pause-343154
	c5828bbebd370       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   25 seconds ago       Running             kube-controller-manager   1                   5b1321b988fef       kube-controller-manager-pause-343154
	1bfb2d384a50d       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago       Running             etcd                      2                   3f46b064e0325       etcd-pause-343154
	cb247f23ee3bf       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   28 seconds ago       Running             kube-scheduler            1                   e044eee342221       kube-scheduler-pause-343154
	0a6163dc45dca       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   30 seconds ago       Exited              kube-proxy                1                   24dcf66a7c4ee       kube-proxy-262z4
	89a3312666995       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   30 seconds ago       Exited              etcd                      1                   3f46b064e0325       etcd-pause-343154
	4c9d4617012ae       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   55 seconds ago       Exited              coredns                   0                   cb0df8747f748       coredns-7db6d8ff4d-v29v8
	1c50233c020c9       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   About a minute ago   Exited              kube-scheduler            0                   274cdf3076959       kube-scheduler-pause-343154
	240c60cf8d397       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   About a minute ago   Exited              kube-controller-manager   0                   78688a7446259       kube-controller-manager-pause-343154
	a774d18a7254f       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   About a minute ago   Exited              kube-apiserver            0                   66735aa1c7670       kube-apiserver-pause-343154
	
	
	==> coredns [0eee0a568fc10e48d174b937e159e333ea46f5fa9499771854666273d45c8fd0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59063 - 54903 "HINFO IN 2354539187072714327.8093630560496879967. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03842797s
	
	
	==> coredns [4c9d4617012ae22820ae12d0b4b652cba31daacd8174ca8f4df2a02ba020f18e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51612 - 11218 "HINFO IN 2228504331175536696.7769828862450269. udp 54 false 512" NXDOMAIN qr,rd,ra 54 0.061741113s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-343154
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-343154
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad0431a8b539d85eadcca9b60d2c335055e9353c
	                    minikube.k8s.io/name=pause-343154
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_31T23_30_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 31 Jul 2024 23:30:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-343154
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 31 Jul 2024 23:31:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 31 Jul 2024 23:31:32 +0000   Wed, 31 Jul 2024 23:30:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 31 Jul 2024 23:31:32 +0000   Wed, 31 Jul 2024 23:30:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 31 Jul 2024 23:31:32 +0000   Wed, 31 Jul 2024 23:30:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 31 Jul 2024 23:31:32 +0000   Wed, 31 Jul 2024 23:30:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.235
	  Hostname:    pause-343154
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ff36ba5e9be45d5b49993e7a22cb716
	  System UUID:                0ff36ba5-e9be-45d5-b499-93e7a22cb716
	  Boot ID:                    cec5f905-52ec-45ce-9596-90127bbdc0f1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-v29v8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-pause-343154                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         71s
	  kube-system                 kube-apiserver-pause-343154             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-pause-343154    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-262z4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-pause-343154             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 77s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node pause-343154 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node pause-343154 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node pause-343154 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    71s                kubelet          Node pause-343154 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  71s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  71s                kubelet          Node pause-343154 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     71s                kubelet          Node pause-343154 status is now: NodeHasSufficientPID
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeReady                70s                kubelet          Node pause-343154 status is now: NodeReady
	  Normal  RegisteredNode           58s                node-controller  Node pause-343154 event: Registered Node pause-343154 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-343154 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-343154 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-343154 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-343154 event: Registered Node pause-343154 in Controller
	
	
	==> dmesg <==
	[  +8.108258] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.132140] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.179640] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.161251] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.298559] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.640115] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.061020] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.462455] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.595757] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.042483] systemd-fstab-generator[1284]: Ignoring "noauto" option for root device
	[  +0.104554] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.884676] systemd-fstab-generator[1497]: Ignoring "noauto" option for root device
	[  +0.155911] kauditd_printk_skb: 21 callbacks suppressed
	[Jul31 23:31] systemd-fstab-generator[2140]: Ignoring "noauto" option for root device
	[  +0.086939] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.064128] systemd-fstab-generator[2152]: Ignoring "noauto" option for root device
	[  +0.188886] systemd-fstab-generator[2167]: Ignoring "noauto" option for root device
	[  +0.153243] systemd-fstab-generator[2179]: Ignoring "noauto" option for root device
	[  +0.329155] systemd-fstab-generator[2207]: Ignoring "noauto" option for root device
	[  +1.057613] systemd-fstab-generator[2392]: Ignoring "noauto" option for root device
	[  +3.218580] kauditd_printk_skb: 158 callbacks suppressed
	[  +1.560244] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +4.531495] kauditd_printk_skb: 41 callbacks suppressed
	[ +12.795121] kauditd_printk_skb: 20 callbacks suppressed
	[  +1.467682] systemd-fstab-generator[3375]: Ignoring "noauto" option for root device
	
	
	==> etcd [1bfb2d384a50d4a6886f9f1aa7aa7fe2d48e44b81622384608c3938419560305] <==
	{"level":"info","ts":"2024-07-31T23:31:29.349797Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:31:29.349843Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:31:29.357423Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T23:31:29.357992Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.235:2380"}
	{"level":"info","ts":"2024-07-31T23:31:29.358117Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.235:2380"}
	{"level":"info","ts":"2024-07-31T23:31:29.361705Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T23:31:29.361634Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5c9ce5d2cd86398f","initial-advertise-peer-urls":["https://192.168.61.235:2380"],"listen-peer-urls":["https://192.168.61.235:2380"],"advertise-client-urls":["https://192.168.61.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T23:31:30.830252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-31T23:31:30.830324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-31T23:31:30.830376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f received MsgPreVoteResp from 5c9ce5d2cd86398f at term 2"}
	{"level":"info","ts":"2024-07-31T23:31:30.830395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became candidate at term 3"}
	{"level":"info","ts":"2024-07-31T23:31:30.830403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f received MsgVoteResp from 5c9ce5d2cd86398f at term 3"}
	{"level":"info","ts":"2024-07-31T23:31:30.830414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became leader at term 3"}
	{"level":"info","ts":"2024-07-31T23:31:30.830426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 5c9ce5d2cd86398f elected leader 5c9ce5d2cd86398f at term 3"}
	{"level":"info","ts":"2024-07-31T23:31:30.833069Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"5c9ce5d2cd86398f","local-member-attributes":"{Name:pause-343154 ClientURLs:[https://192.168.61.235:2379]}","request-path":"/0/members/5c9ce5d2cd86398f/attributes","cluster-id":"d507c5522fd9f0c3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-31T23:31:30.833128Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:31:30.833344Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-31T23:31:30.833362Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-31T23:31:30.83339Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-31T23:31:30.836476Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.235:2379"}
	{"level":"info","ts":"2024-07-31T23:31:30.836641Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-31T23:31:49.695748Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.016952ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4147693258839274315 > lease_revoke:<id:398f910b2265b220>","response":"size:28"}
	{"level":"warn","ts":"2024-07-31T23:31:49.696131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.695723ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-343154\" ","response":"range_response_count:1 size:5427"}
	{"level":"info","ts":"2024-07-31T23:31:49.695966Z","caller":"traceutil/trace.go:171","msg":"trace[351586450] linearizableReadLoop","detail":"{readStateIndex:531; appliedIndex:530; }","duration":"273.500473ms","start":"2024-07-31T23:31:49.42245Z","end":"2024-07-31T23:31:49.695951Z","steps":["trace[351586450] 'read index received'  (duration: 13.09142ms)","trace[351586450] 'applied index is now lower than readState.Index'  (duration: 260.407755ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-31T23:31:49.696184Z","caller":"traceutil/trace.go:171","msg":"trace[200012709] range","detail":"{range_begin:/registry/minions/pause-343154; range_end:; response_count:1; response_revision:493; }","duration":"273.789832ms","start":"2024-07-31T23:31:49.42238Z","end":"2024-07-31T23:31:49.69617Z","steps":["trace[200012709] 'agreement among raft nodes before linearized reading'  (duration: 273.674148ms)"],"step_count":1}
	
	
	==> etcd [89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254] <==
	{"level":"info","ts":"2024-07-31T23:31:24.507609Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"9.77794ms"}
	{"level":"info","ts":"2024-07-31T23:31:24.52238Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-07-31T23:31:24.534894Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","commit-index":437}
	{"level":"info","ts":"2024-07-31T23:31:24.539852Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f switched to configuration voters=()"}
	{"level":"info","ts":"2024-07-31T23:31:24.539929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f became follower at term 2"}
	{"level":"info","ts":"2024-07-31T23:31:24.539973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 5c9ce5d2cd86398f [peers: [], term: 2, commit: 437, applied: 0, lastindex: 437, lastterm: 2]"}
	{"level":"warn","ts":"2024-07-31T23:31:24.719558Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-07-31T23:31:24.965363Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":420}
	{"level":"info","ts":"2024-07-31T23:31:25.244838Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-07-31T23:31:25.37621Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"5c9ce5d2cd86398f","timeout":"7s"}
	{"level":"info","ts":"2024-07-31T23:31:25.376469Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"5c9ce5d2cd86398f"}
	{"level":"info","ts":"2024-07-31T23:31:25.376526Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"5c9ce5d2cd86398f","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-31T23:31:25.376824Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-31T23:31:25.37703Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:31:25.378123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"5c9ce5d2cd86398f switched to configuration voters=(6673461441410251151)"}
	{"level":"info","ts":"2024-07-31T23:31:25.378368Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","added-peer-id":"5c9ce5d2cd86398f","added-peer-peer-urls":["https://192.168.61.235:2380"]}
	{"level":"info","ts":"2024-07-31T23:31:25.378913Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d507c5522fd9f0c3","local-member-id":"5c9ce5d2cd86398f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:31:25.379001Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-31T23:31:25.377143Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:31:25.379631Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-31T23:31:25.381163Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-31T23:31:25.381384Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"5c9ce5d2cd86398f","initial-advertise-peer-urls":["https://192.168.61.235:2380"],"listen-peer-urls":["https://192.168.61.235:2380"],"advertise-client-urls":["https://192.168.61.235:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.235:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-31T23:31:25.381452Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-31T23:31:25.381565Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.235:2380"}
	{"level":"info","ts":"2024-07-31T23:31:25.381588Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.235:2380"}
	
	
	==> kernel <==
	 23:31:55 up 1 min,  0 users,  load average: 1.50, 0.56, 0.20
	Linux pause-343154 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0d0e402e1f5c20bc523700d27f2c0e6bb8423cf66f3985fa8ef5337887f0ad78] <==
	I0731 23:31:32.569971       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0731 23:31:32.649635       1 shared_informer.go:320] Caches are synced for configmaps
	I0731 23:31:32.649795       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0731 23:31:32.652071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0731 23:31:32.660225       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0731 23:31:32.660279       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0731 23:31:32.662468       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0731 23:31:32.662543       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0731 23:31:32.662650       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0731 23:31:32.664462       1 aggregator.go:165] initial CRD sync complete...
	I0731 23:31:32.664505       1 autoregister_controller.go:141] Starting autoregister controller
	I0731 23:31:32.664514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0731 23:31:32.664522       1 cache.go:39] Caches are synced for autoregister controller
	I0731 23:31:32.664560       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0731 23:31:32.664474       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0731 23:31:32.664744       1 policy_source.go:224] refreshing policies
	I0731 23:31:32.702772       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 23:31:33.461643       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 23:31:34.451660       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 23:31:34.471288       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 23:31:34.527974       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 23:31:34.574765       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 23:31:34.586893       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 23:31:45.739739       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 23:31:45.937715       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [a774d18a7254f6d800d5381f4e49913ea1b5b7ad036aead2b86dd286929c1f54] <==
	I0731 23:30:41.681037       1 controller.go:615] quota admission added evaluator for: namespaces
	E0731 23:30:41.693238       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0731 23:30:41.839517       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0731 23:30:42.471496       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0731 23:30:42.476468       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0731 23:30:42.476497       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0731 23:30:43.242141       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0731 23:30:43.303740       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0731 23:30:43.413471       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0731 23:30:43.421912       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.235]
	I0731 23:30:43.423049       1 controller.go:615] quota admission added evaluator for: endpoints
	I0731 23:30:43.432010       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0731 23:30:43.587320       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0731 23:30:44.502206       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0731 23:30:44.521822       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0731 23:30:44.537127       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0731 23:30:57.632634       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0731 23:30:57.857287       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0731 23:31:15.526253       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W0731 23:31:15.547426       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:31:15.547506       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:31:15.548464       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:31:15.549237       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:31:15.549392       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0731 23:31:15.549478       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [240c60cf8d397f37e67c3e43694db8126659e80dddf59faf36bec8b549860cfc] <==
	I0731 23:30:57.741967       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-343154"
	I0731 23:30:57.742062       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0731 23:30:57.742125       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0731 23:30:57.742561       1 shared_informer.go:320] Caches are synced for GC
	I0731 23:30:57.738170       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 23:30:57.784456       1 shared_informer.go:320] Caches are synced for disruption
	I0731 23:30:57.793483       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0731 23:30:57.812775       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 23:30:57.823621       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="173.765325ms"
	I0731 23:30:57.845975       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 23:30:57.846253       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 23:30:57.851961       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.190635ms"
	I0731 23:30:57.852107       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="76.361µs"
	I0731 23:30:57.852194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.987µs"
	I0731 23:30:58.317285       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:30:58.335033       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:30:58.335093       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0731 23:30:58.889327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.057296ms"
	I0731 23:30:58.910358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.533254ms"
	I0731 23:30:58.912965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="98.88µs"
	I0731 23:30:58.916606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="92.664µs"
	I0731 23:30:58.941723       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="168.083µs"
	I0731 23:31:00.839519       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="137.779µs"
	I0731 23:31:02.057396       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="270.302613ms"
	I0731 23:31:02.057912       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.156µs"
	
	
	==> kube-controller-manager [c5828bbebd37055344533d35a31c3d3e9f00f08746995ec6598ce8159deb3842] <==
	I0731 23:31:45.734815       1 shared_informer.go:320] Caches are synced for TTL
	I0731 23:31:45.734935       1 shared_informer.go:320] Caches are synced for HPA
	I0731 23:31:45.735254       1 shared_informer.go:320] Caches are synced for persistent volume
	I0731 23:31:45.736439       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0731 23:31:45.740068       1 shared_informer.go:320] Caches are synced for PV protection
	I0731 23:31:45.742467       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0731 23:31:45.745133       1 shared_informer.go:320] Caches are synced for PVC protection
	I0731 23:31:45.750128       1 shared_informer.go:320] Caches are synced for endpoint
	I0731 23:31:45.756119       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0731 23:31:45.763666       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0731 23:31:45.765044       1 shared_informer.go:320] Caches are synced for ephemeral
	I0731 23:31:45.768496       1 shared_informer.go:320] Caches are synced for GC
	I0731 23:31:45.815483       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0731 23:31:45.819845       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0731 23:31:45.820321       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="322.63µs"
	I0731 23:31:45.834629       1 shared_informer.go:320] Caches are synced for deployment
	I0731 23:31:45.838147       1 shared_informer.go:320] Caches are synced for disruption
	I0731 23:31:45.852856       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 23:31:45.880189       1 shared_informer.go:320] Caches are synced for resource quota
	I0731 23:31:45.913248       1 shared_informer.go:320] Caches are synced for namespace
	I0731 23:31:45.926022       1 shared_informer.go:320] Caches are synced for attach detach
	I0731 23:31:45.934780       1 shared_informer.go:320] Caches are synced for service account
	I0731 23:31:46.388199       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:31:46.435217       1 shared_informer.go:320] Caches are synced for garbage collector
	I0731 23:31:46.435318       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e] <==
	I0731 23:31:24.643232       1 server_linux.go:69] "Using iptables proxy"
	E0731 23:31:24.658067       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-343154\": dial tcp 192.168.61.235:8443: connect: connection refused"
	
	
	==> kube-proxy [1831912aa22eccc61e9ae801f143c1be982f9fbfae43abc9dadaf76114146b2e] <==
	I0731 23:31:33.168664       1 server_linux.go:69] "Using iptables proxy"
	I0731 23:31:33.184751       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.235"]
	I0731 23:31:33.256254       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0731 23:31:33.256349       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0731 23:31:33.256368       1 server_linux.go:165] "Using iptables Proxier"
	I0731 23:31:33.261888       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0731 23:31:33.262099       1 server.go:872] "Version info" version="v1.30.3"
	I0731 23:31:33.262117       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0731 23:31:33.264367       1 config.go:192] "Starting service config controller"
	I0731 23:31:33.264382       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0731 23:31:33.264406       1 config.go:101] "Starting endpoint slice config controller"
	I0731 23:31:33.264411       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0731 23:31:33.270440       1 config.go:319] "Starting node config controller"
	I0731 23:31:33.270456       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0731 23:31:33.365100       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0731 23:31:33.365168       1 shared_informer.go:320] Caches are synced for service config
	I0731 23:31:33.370648       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1c50233c020c9d08dcb523b543dfe9dfc885fe0bd32cd9dbfdea347c8dc7f199] <==
	E0731 23:30:41.599580       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 23:30:41.602283       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 23:30:41.602581       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 23:30:42.457290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 23:30:42.457417       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 23:30:42.493545       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 23:30:42.493591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 23:30:42.626784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 23:30:42.627763       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 23:30:42.716852       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0731 23:30:42.716918       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0731 23:30:42.767394       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 23:30:42.767448       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 23:30:42.804495       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 23:30:42.804555       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 23:30:42.880375       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 23:30:42.880435       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 23:30:42.901587       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 23:30:42.901659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0731 23:30:42.920915       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 23:30:42.920961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 23:30:42.921025       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 23:30:42.921050       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0731 23:30:45.790340       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0731 23:31:15.531148       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cb247f23ee3bf1ed4c004429a2900223d82221ab545c16d9b4d169b56470e5ca] <==
	W0731 23:31:32.557200       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0731 23:31:32.557234       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0731 23:31:32.557308       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0731 23:31:32.557338       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0731 23:31:32.557405       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0731 23:31:32.557434       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0731 23:31:32.557505       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0731 23:31:32.557535       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0731 23:31:32.557610       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0731 23:31:32.557640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0731 23:31:32.561868       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0731 23:31:32.561938       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0731 23:31:32.562064       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0731 23:31:32.562101       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0731 23:31:32.562192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0731 23:31:32.562237       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0731 23:31:32.562358       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0731 23:31:32.562404       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0731 23:31:32.562472       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0731 23:31:32.562508       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0731 23:31:32.562570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0731 23:31:32.562604       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0731 23:31:32.562710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0731 23:31:32.562770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0731 23:31:32.659773       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 31 23:31:28 pause-343154 kubelet[2806]: E0731 23:31:28.855764    2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-343154?timeout=10s\": dial tcp 192.168.61.235:8443: connect: connection refused" interval="400ms"
	Jul 31 23:31:28 pause-343154 kubelet[2806]: I0731 23:31:28.954645    2806 kubelet_node_status.go:73] "Attempting to register node" node="pause-343154"
	Jul 31 23:31:28 pause-343154 kubelet[2806]: E0731 23:31:28.956663    2806 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.235:8443: connect: connection refused" node="pause-343154"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: E0731 23:31:29.027202    2806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.61.235:8443: connect: connection refused" event="&Event{ObjectMeta:{pause-343154.17e7702572a7ba16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:pause-343154,UID:pause-343154,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:pause-343154,},FirstTimestamp:2024-07-31 23:31:28.632199702 +0000 UTC m=+0.102585905,LastTimestamp:2024-07-31 23:31:28.632199702 +0000 UTC m=+0.102585905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:pause-343154,}"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: I0731 23:31:29.116927    2806 scope.go:117] "RemoveContainer" containerID="89a33126669959cc88c32882a374df02aaa48187e325bad84aa8b3103f06e254"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: E0731 23:31:29.257286    2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-343154?timeout=10s\": dial tcp 192.168.61.235:8443: connect: connection refused" interval="800ms"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: I0731 23:31:29.359986    2806 kubelet_node_status.go:73] "Attempting to register node" node="pause-343154"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: E0731 23:31:29.361128    2806 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.235:8443: connect: connection refused" node="pause-343154"
	Jul 31 23:31:29 pause-343154 kubelet[2806]: W0731 23:31:29.534292    2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.235:8443: connect: connection refused
	Jul 31 23:31:29 pause-343154 kubelet[2806]: E0731 23:31:29.534364    2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.235:8443: connect: connection refused
	Jul 31 23:31:29 pause-343154 kubelet[2806]: W0731 23:31:29.539449    2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.235:8443: connect: connection refused
	Jul 31 23:31:29 pause-343154 kubelet[2806]: E0731 23:31:29.539533    2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.61.235:8443: connect: connection refused
	Jul 31 23:31:30 pause-343154 kubelet[2806]: I0731 23:31:30.163530    2806 kubelet_node_status.go:73] "Attempting to register node" node="pause-343154"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.632329    2806 apiserver.go:52] "Watching apiserver"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.636791    2806 topology_manager.go:215] "Topology Admit Handler" podUID="247719d4-4db6-4e42-aa5b-ee65d12de302" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v29v8"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.637206    2806 topology_manager.go:215] "Topology Admit Handler" podUID="17405d1b-40da-4fdf-ae4e-0730d3737150" podNamespace="kube-system" podName="kube-proxy-262z4"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.649222    2806 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.704813    2806 kubelet_node_status.go:112] "Node was previously registered" node="pause-343154"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.705130    2806 kubelet_node_status.go:76] "Successfully registered node" node="pause-343154"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.705489    2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17405d1b-40da-4fdf-ae4e-0730d3737150-xtables-lock\") pod \"kube-proxy-262z4\" (UID: \"17405d1b-40da-4fdf-ae4e-0730d3737150\") " pod="kube-system/kube-proxy-262z4"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.705612    2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17405d1b-40da-4fdf-ae4e-0730d3737150-lib-modules\") pod \"kube-proxy-262z4\" (UID: \"17405d1b-40da-4fdf-ae4e-0730d3737150\") " pod="kube-system/kube-proxy-262z4"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.712053    2806 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.714413    2806 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 31 23:31:32 pause-343154 kubelet[2806]: I0731 23:31:32.938370    2806 scope.go:117] "RemoveContainer" containerID="0a6163dc45dcaed4fc879b05a4d856d989306a7d30f27c2be8dc2daf6caa467e"
	Jul 31 23:31:36 pause-343154 kubelet[2806]: I0731 23:31:36.106080    2806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-343154 -n pause-343154
helpers_test.go:261: (dbg) Run:  kubectl --context pause-343154 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (52.75s)
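
The two post-mortem commands above (the minikube API-server status check and the non-Running pod listing) can be replayed by hand against the same profile. Below is a minimal Go sketch of those two calls, assuming the pause-343154 profile still exists and the minikube binary is at out/minikube-linux-amd64; it is an illustration only, not the helpers_test.go implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same check as: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-343154
		out, err := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.APIServer}}", "-p", "pause-343154").CombinedOutput()
		fmt.Printf("apiserver status: %s (err: %v)\n", out, err)

		// Same check as: kubectl --context pause-343154 get po -A --field-selector=status.phase!=Running
		out, err = exec.Command("kubectl", "--context", "pause-343154",
			"get", "po", "-A", "-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").CombinedOutput()
		fmt.Printf("non-Running pods: %s (err: %v)\n", out, err)
	}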

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (7200.063s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
[the identical connection-refused pod-list warning above is emitted 60 more times as the test keeps polling]
E0731 23:54:53.720048 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.198:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.198:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: (previous WARNING repeated 40 more times: dial tcp 192.168.72.198:8443: connect: connection refused)
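Note: the warnings above come from the integration helper's pod-wait loop. The goroutine dump below shows integration.PodWait driving k8s.io/apimachinery's wait.PollUntilContextTimeout, which lists pods matching k8s-app=kubernetes-dashboard on every tick and logs a WARNING whenever the apiserver refuses the connection. The following is a minimal sketch of that polling pattern, assuming a standard client-go clientset; the function name, interval, and timeout are illustrative and not the helper's actual values.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDashboardPod polls the apiserver until a pod matching the label
// selector exists. Transient errors such as "connection refused" are logged
// and retried, which is what produces the repeated WARNING lines above.
func waitForDashboardPod(ctx context.Context, cs kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Log and keep polling; the caller's timeout bounds the wait.
				fmt.Printf("WARNING: pod list returned: %v\n", err)
				return false, nil
			}
			return len(pods.Items) > 0, nil
		})
}

func main() {
	// Build a client from the local kubeconfig (illustrative; the test suite
	// uses the kubeconfig written for the cluster under test).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForDashboardPod(context.Background(), cs); err != nil {
		fmt.Println("timed out waiting for dashboard pod:", err)
	}
}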
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (23m26s)
	TestStartStop (25m21s)
	TestStartStop/group/default-k8s-diff-port (18m55s)
	TestStartStop/group/default-k8s-diff-port/serial (18m55s)
	TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5m15s)
	TestStartStop/group/embed-certs (20m27s)
	TestStartStop/group/embed-certs/serial (20m27s)
	TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5m47s)
	TestStartStop/group/no-preload (20m31s)
	TestStartStop/group/no-preload/serial (20m31s)
	TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (4m44s)
	TestStartStop/group/old-k8s-version (20m41s)
	TestStartStop/group/old-k8s-version/serial (20m41s)
	TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (2m18s)

                                                
                                                
goroutine 3930 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 20 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00001ad00, 0xc0004d3bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000aa82a0, {0x49d6120, 0x2b, 0x2b}, {0x26b7065?, 0xc0009f1680?, 0x4a92a60?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0001e6be0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0001e6be0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 7 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00064fa80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 1875 [chan receive, 26 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0013d84e0, 0x313f860)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1797
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2495 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2494
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 534 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc000867980, 0xc001a1a780)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 533
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 84 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 83
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 2413 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bf1e0, 0xc000060060}, 0xc00139ef50, 0xc0000acf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bf1e0, 0xc000060060}, 0xd0?, 0xc00139ef50, 0xc00139ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bf1e0?, 0xc000060060?}, 0xc00139efb0?, 0x7b8e18?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00139efd0?, 0x592e44?, 0xc001a1a420?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2452
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2372 [chan receive, 2 minutes]:
testing.(*T).Run(0xc00001a680, {0x2688464?, 0x60400000004?}, 0xc000b84480)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00001a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00001a680, 0xc00180e000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1876
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2414 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2413
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1742 [chan receive, 23 minutes]:
testing.(*T).Run(0xc0015b2000, {0x265c689?, 0x55127c?}, 0xc002808018)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0015b2000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0015b2000, 0x313f640)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 412 [chan receive, 77 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b94d00, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 383
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1940 [chan receive, 23 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001a82820, 0xc002808018)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1742
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3210 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3209
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2434 [chan receive, 6 minutes]:
testing.(*T).Run(0xc000c6a1a0, {0x2688464?, 0x60400000004?}, 0xc000520480)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000c6a1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc000c6a1a0, 0xc001bba180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1881
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 457 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000b94cd0, 0x23)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00137a9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b94d00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006b8de0, {0x369b180, 0xc0004d5080}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006b8de0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 412
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 730 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc00168cf00, 0xc001381f80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 729
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 262 [IO wait, 79 minutes]:
internal/poll.runtime_pollWait(0x7ff9341b9cb0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0001ac600)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0001ac600)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00079a260)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00079a260)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0008720f0, {0x36b2020, 0xc00079a260})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0008720f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x0?, 0xc00001a4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 259
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 1996 [chan receive, 23 minutes]:
testing.(*testContext).waitParallel(0xc00071b720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bfcd00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bfcd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000bfcd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000bfcd00, 0xc0001ac980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 411 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00137aae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 383
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3208 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001dc4190, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001923140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001dc41c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002742330, {0x369b180, 0xc0009e4b10}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002742330, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3228
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2451 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0004ab200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2408
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2084 [chan receive, 23 minutes]:
testing.(*testContext).waitParallel(0xc00071b720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013d81a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013d81a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d81a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013d81a0, 0xc00180e080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1993 [chan receive, 23 minutes]:
testing.(*testContext).waitParallel(0xc00071b720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bfc1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bfc1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000bfc1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000bfc1a0, 0xc0001ac800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2412 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000984990, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0004aafc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009849c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00012b7b0, {0x369b180, 0xc001cd2210}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00012b7b0, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2452
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 842 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b10000, 0xc00196ca80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 370
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 764 [select, 76 minutes]:
net/http.(*persistConn).readLoop(0xc0016e7680)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 762
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 3227 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001923260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3226
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2763 [runnable]:
syscall.Syscall(0x0, 0xc, 0xc001934800, 0x800)
	/usr/local/go/src/syscall/syscall_linux.go:69 +0x25
syscall.read(0xc001bbbf00?, {0xc001934800?, 0x700?, 0xc0007e07a8?})
	/usr/local/go/src/syscall/zsyscall_linux_amd64.go:736 +0x38
syscall.Read(...)
	/usr/local/go/src/syscall/syscall_unix.go:181
internal/poll.ignoringEINTRIO(...)
	/usr/local/go/src/internal/poll/fd_unix.go:736
internal/poll.(*FD).Read(0xc001bbbf00, {0xc001934800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:160 +0x2ae
net.(*netFD).Read(0xc001bbbf00, {0xc001934800?, 0xc0014a8500?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000778ad8, {0xc001934800?, 0xc00193485f?, 0x6f?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001cd0420, {0xc001934800?, 0x0?, 0xc001cd0420?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0001b29b0, {0x369b920, 0xc001cd0420})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0001b2708, {0x7ff92c4a6f18, 0xc001a4faa0}, 0xc0007e0980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0001b2708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0001b2708, {0xc0014fb000, 0x1000, 0xc001b9d500?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001a812c0, {0xc002761380, 0x9, 0x4991c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3699dc0, 0xc001a812c0}, {0xc002761380, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc002761380, 0x9, 0x7e0dc0?}, {0x3699dc0?, 0xc001a812c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc002761340)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc0007e0fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001488d80)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2762
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 765 [select, 76 minutes]:
net/http.(*persistConn).writeLoop(0xc0016e7680)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 762
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 599 [chan send, 76 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001b0900, 0xc001ada180)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 598
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 458 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bf1e0, 0xc000060060}, 0xc00139cf50, 0xc000672f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bf1e0, 0xc000060060}, 0x40?, 0xc00139cf50, 0xc00139cf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bf1e0?, 0xc000060060?}, 0x6e6f697372655673?, 0x332e30332e31763a?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00139cfd0?, 0x592e44?, 0xc00010eb40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 412
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 459 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 458
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2452 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009849c0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2408
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2591 [IO wait]:
internal/poll.runtime_pollWait(0x7ff9341b97d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00180ea80?, 0xc000b9b800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00180ea80, {0xc000b9b800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc00180ea80, {0xc000b9b800?, 0x7ff92c449dd8?, 0xc001cd0378?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc00051a0c0, {0xc000b9b800?, 0xc000673938?, 0x41469b?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001cd0378, {0xc000b9b800?, 0x0?, 0xc001cd0378?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0018d70b0, {0x369b920, 0xc001cd0378})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0018d6e08, {0x369acc0, 0xc00051a0c0}, 0xc000673980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0018d6e08, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0018d6e08, {0xc00070d000, 0x1000, 0xc0014336c0?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001b96d20, {0xc001894120, 0x9, 0x4991c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3699dc0, 0xc001b96d20}, {0xc001894120, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc001894120, 0x9, 0x673dc0?}, {0x3699dc0?, 0xc001b96d20?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0018940e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000673fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000866000)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2590
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 3209 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bf1e0, 0xc000060060}, 0xc000c77f50, 0xc000c77f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bf1e0, 0xc000060060}, 0x16?, 0xc000c77f50, 0xc000c77f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bf1e0?, 0xc000060060?}, 0x99b656?, 0xc000223200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000223200?, 0x592e44?, 0xc0000d4200?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3228
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2646 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36bf020, 0xc000636380}, {0x36b2710, 0xc001a2f120}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36bf020?, 0xc0000360e0?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36bf020, 0xc0000360e0}, 0xc000c6a340, {0xc0018c3818, 0x12}, {0x26826e8, 0x14}, {0x269a2c2, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36bf020, 0xc0000360e0}, 0xc000c6a340, {0xc0018c3818, 0x12}, {0x2669a6b?, 0xc0018f2760?}, {0x551133?, 0x4a170f?}, {0xc000ba3800, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000c6a340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000c6a340, 0xc000520480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2434
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2513 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b94d80, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2465
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2493 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc000b94d50, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2149aa0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00137b620)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b94d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00192a350, {0x369b180, 0xc001414b70}, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00192a350, 0x3b9aca00, 0x0, 0x1, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2513
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 1994 [chan receive, 23 minutes]:
testing.(*testContext).waitParallel(0xc00071b720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bfc340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bfc340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000bfc340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000bfc340, 0xc0001ac880)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2377 [chan receive, 6 minutes]:
testing.(*T).Run(0xc00001ab60, {0x2688464?, 0x60400000004?}, 0xc001bbb200)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00001ab60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00001ab60, 0xc00180e380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1879
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1995 [chan receive, 23 minutes]:
testing.(*testContext).waitParallel(0xc00071b720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000bfc4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000bfc4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000bfc4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000bfc4e0, 0xc0001ac900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1941 [chan receive, 23 minutes]:
testing.(*testContext).waitParallel(0xc00071b720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001a82b60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001a82b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001a82b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001a82b60, 0xc000520100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3228 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001dc41c0, 0xc000060060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3226
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2694 [IO wait]:
internal/poll.runtime_pollWait(0x7ff9341b94f0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0000d5980?, 0xc001934000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0000d5980, {0xc001934000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc0000d5980, {0xc001934000?, 0xc000576640?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000778928, {0xc001934000?, 0xc00193405f?, 0x6f?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc001cd03d8, {0xc001934000?, 0x0?, 0xc001cd03d8?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc000004d30, {0x369b920, 0xc001cd03d8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000004a88, {0x7ff92c4a6f18, 0xc001a4f0c8}, 0xc000676980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000004a88, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc000004a88, {0xc00196f000, 0x1000, 0xc0014336c0?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001c22fc0, {0xc002760f20, 0x9, 0x4991c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3699dc0, 0xc001c22fc0}, {0xc002760f20, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc002760f20, 0x9, 0x676dc0?}, {0x3699dc0?, 0xc001c22fc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc002760ee0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000676fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0019a1200)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2693
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 1797 [chan receive, 26 minutes]:
testing.(*T).Run(0xc0013d8340, {0x265c689?, 0x551133?}, 0x313f860)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0013d8340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0013d8340, 0x313f688)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1876 [chan receive, 22 minutes]:
testing.(*T).Run(0xc0013d8b60, {0x265dc2f?, 0x0?}, 0xc00180e000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013d8b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0013d8b60, 0xc0008ec780)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1875
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1877 [chan receive, 26 minutes]:
testing.(*testContext).waitParallel(0xc00071b720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013d8ea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013d8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013d8ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0013d8ea0, 0xc0008eca80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1875
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1878 [chan receive, 20 minutes]:
testing.(*T).Run(0xc0013d9040, {0x265dc2f?, 0x0?}, 0xc000b84400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013d9040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0013d9040, 0xc0008ecac0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1875
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1879 [chan receive, 22 minutes]:
testing.(*T).Run(0xc0013d91e0, {0x265dc2f?, 0x0?}, 0xc00180e380)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013d91e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0013d91e0, 0xc0008ecb00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1875
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2432 [chan receive, 6 minutes]:
testing.(*T).Run(0xc001a829c0, {0x2688464?, 0x60400000004?}, 0xc0000d4b00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001a829c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001a829c0, 0xc000b84400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1878
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1881 [chan receive, 20 minutes]:
testing.(*T).Run(0xc0013d96c0, {0x265dc2f?, 0x0?}, 0xc001bba180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0013d96c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0013d96c0, 0xc0008ecbc0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1875
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2085 [chan receive, 23 minutes]:
testing.(*testContext).waitParallel(0xc00071b720)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0013d9860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0013d9860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0013d9860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0013d9860, 0xc00180e100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1940
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3226 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36bf020, 0xc0004f92d0}, {0x36b2710, 0xc000578f60}, 0x1, 0x0, 0xc0024a9c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36bf020?, 0xc0003de000?}, 0x3b9aca00, 0xc000aa3e10?, 0x1, 0xc000aa3c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36bf020, 0xc0003de000}, 0xc00001aea0, {0xc0017981b0, 0x16}, {0x26826e8, 0x14}, {0x269a2c2, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36bf020, 0xc0003de000}, 0xc00001aea0, {0xc0017981b0, 0x16}, {0x2673a13?, 0xc000094760?}, {0x551133?, 0x4a170f?}, {0xc001570480, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00001aea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00001aea0, 0xc000b84480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2372
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2672 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36bf020, 0xc00003a310}, {0x36b2710, 0xc0013ba400}, 0x1, 0x0, 0xc0024adc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36bf020?, 0xc000174380?}, 0x3b9aca00, 0xc000c63e10?, 0x1, 0xc000c63c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36bf020, 0xc000174380}, 0xc001a831e0, {0xc001b3c2a0, 0x1c}, {0x26826e8, 0x14}, {0x269a2c2, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36bf020, 0xc000174380}, 0xc001a831e0, {0xc001b3c2a0, 0x1c}, {0x26855e2?, 0xc0018f3f60?}, {0x551133?, 0x4a170f?}, {0xc0000f4800, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001a831e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001a831e0, 0xc0000d4b00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2432
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2771 [runnable]:
golang.org/x/net/http2.(*ClientConn).roundTrip(0xc001488d80, 0xc00199e240, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:1379 +0x52c
golang.org/x/net/http2.(*ClientConn).RoundTrip(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:1276
golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc00183a140, 0xc00199e240, {0x20?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:602 +0x1ae
golang.org/x/net/http2.(*Transport).RoundTrip(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:560
golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip({0xc00011e3c0?}, 0xc00199e240?)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:3226 +0x1a
net/http.(*Transport).roundTrip(0xc00011e3c0, 0xc00199e240)
	/usr/local/go/src/net/http/transport.go:553 +0x39c
net/http.(*Transport).RoundTrip(0x243bc00?, 0xc001a51aa0?)
	/usr/local/go/src/net/http/roundtrip.go:17 +0x13
k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0016a3020, 0xc00199e120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/round_trippers.go:168 +0x326
net/http.send(0xc00199e120, {0x369f280, 0xc0016a3020}, {0x414601?, 0x2c?, 0x0?})
	/usr/local/go/src/net/http/client.go:259 +0x5e4
net/http.(*Client).send(0xc0014866f0, 0xc00199e120, {0x0?, 0xc00199e120?, 0x0?})
	/usr/local/go/src/net/http/client.go:180 +0x98
net/http.(*Client).do(0xc0014866f0, 0xc00199e120)
	/usr/local/go/src/net/http/client.go:724 +0x8dc
net/http.(*Client).Do(...)
	/usr/local/go/src/net/http/client.go:590
k8s.io/client-go/rest.(*Request).request(0xc00199e000, {0x36bf020, 0xc00003c700}, 0xc001f40e20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/rest/request.go:1023 +0x397
k8s.io/client-go/rest.(*Request).Do(0xc00199e000, {0x36bf020, 0xc00003c700})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/rest/request.go:1063 +0xc5
k8s.io/client-go/kubernetes/typed/core/v1.(*pods).List(0xc00276f1e0, {0x36bf020, 0xc00003c700}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x269a2c2, 0x1c}, {0x0, ...}, ...})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/kubernetes/typed/core/v1/pod.go:99 +0x165
k8s.io/minikube/test/integration.PodWait.func1({0x36bf020, 0xc00003c700})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:327 +0x10b
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2(0xc001f419d0?, {0x36bf020?, 0xc00003c700?})
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:87 +0x52
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36bf020, 0xc00003c700}, {0x36b2710, 0xc0016a3da0}, 0x1, 0x0, 0xc001f41c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:88 +0x24d
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36bf020?, 0xc0003de070?}, 0x3b9aca00, 0xc001515e10?, 0x1, 0xc001515c18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36bf020, 0xc0003de070}, 0xc00001a820, {0xc001798360, 0x11}, {0x26826e8, 0x14}, {0x269a2c2, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36bf020, 0xc0003de070}, 0xc00001a820, {0xc001798360, 0x11}, {0x266785d?, 0xc00139cf60?}, {0x551133?, 0x4a170f?}, {0xc000ba3700, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00001a820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00001a820, 0xc001bbb200)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2377
	/usr/local/go/src/testing/testing.go:1742 +0x390
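Goroutines 3226, 2672 and 2771 are three parallel TestStartStop subtests inside the same helper chain: validateAppExistsAfterStop calls PodWait (helpers_test.go:371), PodWait drives wait.PollUntilContextTimeout with a 1s interval (the 0x3b9aca00 argument), and each poll lists pods through client-go (helpers_test.go:327) until the expected pod appears or the deadline passes; goroutine 2771 happens to be caught mid-request in the HTTP/2 transport, with goroutine 3929 below as the stream writer it spawned. A minimal sketch of that polling shape, using an illustrative waitForRunningPod helper rather than minikube's actual PodWait signature:

// Illustrative helper matching the shape of the stack traces above; the name,
// parameters and Running check are assumptions, not minikube's PodWait API.
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForRunningPod(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	// Poll once per second until a matching pod reports Running or the timeout expires.
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat transient API errors as retryable
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}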

                                                
                                                
goroutine 2494 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bf1e0, 0xc000060060}, 0xc000c71f50, 0xc000671f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bf1e0, 0xc000060060}, 0xe0?, 0xc000c71f50, 0xc000c71f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bf1e0?, 0xc000060060?}, 0xc0015b21a0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00192a310?, 0xc00275c318?, 0xc000c71fa8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2513
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2512 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00137b740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2465
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3929 [runnable]:
golang.org/x/net/http2.(*clientStream).writeRequest(0xc000867380, 0xc00199e240, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:1532 +0xa85
golang.org/x/net/http2.(*clientStream).doRequest(0xc000867380, 0xc000648140?, 0xc0000ffd90?)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:1410 +0x56
created by golang.org/x/net/http2.(*ClientConn).roundTrip in goroutine 2771
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:1315 +0x3e5

                                                
                                    

Test pass (175/216)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.81
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 4.23
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 5.31
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.14
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.59
31 TestOffline 95.05
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
37 TestCertOptions 75.03
38 TestCertExpiration 255.04
40 TestForceSystemdFlag 71.56
41 TestForceSystemdEnv 47.56
43 TestKVMDriverInstallOrUpdate 3.77
47 TestErrorSpam/setup 38.68
48 TestErrorSpam/start 0.35
49 TestErrorSpam/status 0.76
50 TestErrorSpam/pause 1.56
51 TestErrorSpam/unpause 1.59
52 TestErrorSpam/stop 4.27
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 64.29
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 39.61
59 TestFunctional/serial/KubeContext 0.05
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
64 TestFunctional/serial/CacheCmd/cache/add_local 2.03
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.05
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.76
69 TestFunctional/serial/CacheCmd/cache/delete 0.1
70 TestFunctional/serial/MinikubeKubectlCmd 0.11
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 31.79
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.36
75 TestFunctional/serial/LogsFileCmd 1.38
76 TestFunctional/serial/InvalidService 5.99
78 TestFunctional/parallel/ConfigCmd 0.33
79 TestFunctional/parallel/DashboardCmd 11.12
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.14
82 TestFunctional/parallel/StatusCmd 0.77
86 TestFunctional/parallel/ServiceCmdConnect 15.07
87 TestFunctional/parallel/AddonsCmd 0.15
88 TestFunctional/parallel/PersistentVolumeClaim 38.48
90 TestFunctional/parallel/SSHCmd 0.43
91 TestFunctional/parallel/CpCmd 1.3
92 TestFunctional/parallel/MySQL 20.41
93 TestFunctional/parallel/FileSync 0.22
94 TestFunctional/parallel/CertSync 1.33
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
102 TestFunctional/parallel/License 0.23
103 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
104 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
105 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
106 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
108 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
109 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
110 TestFunctional/parallel/ImageCommands/ImageBuild 3.19
111 TestFunctional/parallel/ImageCommands/Setup 1.52
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.5
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 21.23
118 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
119 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.11
120 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.32
121 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
122 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.96
123 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.17
124 TestFunctional/parallel/MountCmd/any-port 13.49
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
131 TestFunctional/parallel/ServiceCmd/DeployApp 12.15
132 TestFunctional/parallel/MountCmd/specific-port 1.66
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.45
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.29
135 TestFunctional/parallel/ProfileCmd/profile_list 0.31
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
137 TestFunctional/parallel/Version/short 0.05
138 TestFunctional/parallel/Version/components 0.66
139 TestFunctional/parallel/ServiceCmd/List 0.88
140 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
142 TestFunctional/parallel/ServiceCmd/Format 0.54
143 TestFunctional/parallel/ServiceCmd/URL 0.56
144 TestFunctional/delete_echo-server_images 0.03
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestMultiControlPlane/serial/StartCluster 209.68
151 TestMultiControlPlane/serial/DeployApp 6.64
152 TestMultiControlPlane/serial/PingHostFromPods 1.2
153 TestMultiControlPlane/serial/AddWorkerNode 54.17
154 TestMultiControlPlane/serial/NodeLabels 0.07
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
156 TestMultiControlPlane/serial/CopyFile 13.08
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
162 TestMultiControlPlane/serial/DeleteSecondaryNode 17.18
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.39
165 TestMultiControlPlane/serial/RestartCluster 354.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
167 TestMultiControlPlane/serial/AddSecondaryNode 76.2
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.57
172 TestJSONOutput/start/Command 57.52
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.72
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.64
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.38
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.21
200 TestMainNoArgs 0.05
201 TestMinikubeProfile 90.86
204 TestMountStart/serial/StartWithMountFirst 26.82
205 TestMountStart/serial/VerifyMountFirst 0.4
206 TestMountStart/serial/StartWithMountSecond 24.35
207 TestMountStart/serial/VerifyMountSecond 0.4
208 TestMountStart/serial/DeleteFirst 0.93
209 TestMountStart/serial/VerifyMountPostDelete 0.4
210 TestMountStart/serial/Stop 1.29
211 TestMountStart/serial/RestartStopped 22.43
212 TestMountStart/serial/VerifyMountPostStop 0.39
215 TestMultiNode/serial/FreshStart2Nodes 125.44
216 TestMultiNode/serial/DeployApp2Nodes 4.86
217 TestMultiNode/serial/PingHostFrom2Pods 0.83
218 TestMultiNode/serial/AddNode 50.73
219 TestMultiNode/serial/MultiNodeLabels 0.07
220 TestMultiNode/serial/ProfileList 0.23
221 TestMultiNode/serial/CopyFile 7.59
222 TestMultiNode/serial/StopNode 2.31
223 TestMultiNode/serial/StartAfterStop 39.41
225 TestMultiNode/serial/DeleteNode 2.4
227 TestMultiNode/serial/RestartMultiNode 183.74
228 TestMultiNode/serial/ValidateNameConflict 48.81
235 TestScheduledStopUnix 111.6
239 TestRunningBinaryUpgrade 133.34
244 TestPause/serial/Start 107.76
245 TestStoppedBinaryUpgrade/Setup 0.95
246 TestStoppedBinaryUpgrade/Upgrade 172.77
255 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
258 TestNoKubernetes/serial/StartWithK8s 58.34
270 TestNoKubernetes/serial/StartWithStopK8s 55.95
271 TestNoKubernetes/serial/Start 42.98
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
273 TestNoKubernetes/serial/ProfileList 1.19
274 TestNoKubernetes/serial/Stop 1.31
275 TestNoKubernetes/serial/StartNoArgs 47.34
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
TestDownloadOnly/v1.20.0/json-events (8.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-304498 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-304498 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.809755711s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.81s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-304498
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-304498: exit status 85 (61.268018ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-304498 | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC |          |
	|         | -p download-only-304498        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:56:11
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:56:11.211462 1179412 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:56:11.211750 1179412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:56:11.211760 1179412 out.go:304] Setting ErrFile to fd 2...
	I0731 21:56:11.211766 1179412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:56:11.211997 1179412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	W0731 21:56:11.212166 1179412 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19312-1172186/.minikube/config/config.json: open /home/jenkins/minikube-integration/19312-1172186/.minikube/config/config.json: no such file or directory
	I0731 21:56:11.212792 1179412 out.go:298] Setting JSON to true
	I0731 21:56:11.213847 1179412 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":20322,"bootTime":1722442649,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:56:11.213915 1179412 start.go:139] virtualization: kvm guest
	I0731 21:56:11.216579 1179412 out.go:97] [download-only-304498] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0731 21:56:11.216739 1179412 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball: no such file or directory
	I0731 21:56:11.216812 1179412 notify.go:220] Checking for updates...
	I0731 21:56:11.218253 1179412 out.go:169] MINIKUBE_LOCATION=19312
	I0731 21:56:11.219680 1179412 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:56:11.221253 1179412 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 21:56:11.222720 1179412 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 21:56:11.223995 1179412 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 21:56:11.226258 1179412 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 21:56:11.226568 1179412 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:56:11.262828 1179412 out.go:97] Using the kvm2 driver based on user configuration
	I0731 21:56:11.262884 1179412 start.go:297] selected driver: kvm2
	I0731 21:56:11.262893 1179412 start.go:901] validating driver "kvm2" against <nil>
	I0731 21:56:11.263313 1179412 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:56:11.263473 1179412 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:56:11.280340 1179412 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:56:11.280429 1179412 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:56:11.280972 1179412 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 21:56:11.281186 1179412 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 21:56:11.281221 1179412 cni.go:84] Creating CNI manager for ""
	I0731 21:56:11.281235 1179412 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:56:11.281249 1179412 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 21:56:11.281337 1179412 start.go:340] cluster config:
	{Name:download-only-304498 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-304498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:56:11.281543 1179412 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:56:11.283521 1179412 out.go:97] Downloading VM boot image ...
	I0731 21:56:11.283586 1179412 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0731 21:56:14.018358 1179412 out.go:97] Starting "download-only-304498" primary control-plane node in "download-only-304498" cluster
	I0731 21:56:14.018398 1179412 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:56:14.047873 1179412 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0731 21:56:14.047929 1179412 cache.go:56] Caching tarball of preloaded images
	I0731 21:56:14.048115 1179412 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0731 21:56:14.050118 1179412 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0731 21:56:14.050149 1179412 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 21:56:14.075214 1179412 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-304498 host does not exist
	  To start a cluster, run: "minikube start -p download-only-304498"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-304498
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (4.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-013353 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-013353 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.226950953s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (4.23s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-013353
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-013353: exit status 85 (62.216612ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-304498 | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC |                     |
	|         | -p download-only-304498        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| delete  | -p download-only-304498        | download-only-304498 | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| start   | -o=json --download-only        | download-only-013353 | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC |                     |
	|         | -p download-only-013353        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:56:20
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:56:20.352843 1179603 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:56:20.353098 1179603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:56:20.353106 1179603 out.go:304] Setting ErrFile to fd 2...
	I0731 21:56:20.353111 1179603 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:56:20.353311 1179603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 21:56:20.353882 1179603 out.go:298] Setting JSON to true
	I0731 21:56:20.354967 1179603 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":20331,"bootTime":1722442649,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:56:20.355030 1179603 start.go:139] virtualization: kvm guest
	I0731 21:56:20.357081 1179603 out.go:97] [download-only-013353] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:56:20.357203 1179603 notify.go:220] Checking for updates...
	I0731 21:56:20.358582 1179603 out.go:169] MINIKUBE_LOCATION=19312
	I0731 21:56:20.360028 1179603 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:56:20.361306 1179603 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 21:56:20.362569 1179603 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 21:56:20.363804 1179603 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-013353 host does not exist
	  To start a cluster, run: "minikube start -p download-only-013353"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-013353
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (5.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-828038 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-828038 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.313522982s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (5.31s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-828038
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-828038: exit status 85 (61.877714ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-304498 | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC |                     |
	|         | -p download-only-304498             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| delete  | -p download-only-304498             | download-only-304498 | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| start   | -o=json --download-only             | download-only-013353 | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC |                     |
	|         | -p download-only-013353             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| delete  | -p download-only-013353             | download-only-013353 | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC | 31 Jul 24 21:56 UTC |
	| start   | -o=json --download-only             | download-only-828038 | jenkins | v1.33.1 | 31 Jul 24 21:56 UTC |                     |
	|         | -p download-only-828038             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/31 21:56:24
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0731 21:56:24.911860 1179794 out.go:291] Setting OutFile to fd 1 ...
	I0731 21:56:24.912111 1179794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:56:24.912119 1179794 out.go:304] Setting ErrFile to fd 2...
	I0731 21:56:24.912124 1179794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 21:56:24.912318 1179794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 21:56:24.912923 1179794 out.go:298] Setting JSON to true
	I0731 21:56:24.913977 1179794 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":20336,"bootTime":1722442649,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 21:56:24.914046 1179794 start.go:139] virtualization: kvm guest
	I0731 21:56:24.916183 1179794 out.go:97] [download-only-828038] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 21:56:24.916364 1179794 notify.go:220] Checking for updates...
	I0731 21:56:24.917706 1179794 out.go:169] MINIKUBE_LOCATION=19312
	I0731 21:56:24.919292 1179794 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 21:56:24.920666 1179794 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 21:56:24.922221 1179794 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 21:56:24.923601 1179794 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0731 21:56:24.926588 1179794 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0731 21:56:24.926851 1179794 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 21:56:24.959793 1179794 out.go:97] Using the kvm2 driver based on user configuration
	I0731 21:56:24.959828 1179794 start.go:297] selected driver: kvm2
	I0731 21:56:24.959834 1179794 start.go:901] validating driver "kvm2" against <nil>
	I0731 21:56:24.960312 1179794 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:56:24.960416 1179794 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19312-1172186/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0731 21:56:24.976399 1179794 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0731 21:56:24.976481 1179794 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0731 21:56:24.977236 1179794 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0731 21:56:24.977460 1179794 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0731 21:56:24.977517 1179794 cni.go:84] Creating CNI manager for ""
	I0731 21:56:24.977535 1179794 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0731 21:56:24.977549 1179794 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0731 21:56:24.977637 1179794 start.go:340] cluster config:
	{Name:download-only-828038 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-828038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 21:56:24.977772 1179794 iso.go:125] acquiring lock: {Name:mkfe5cf1583dc17d41e97f0cc3191176c097641f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0731 21:56:24.979374 1179794 out.go:97] Starting "download-only-828038" primary control-plane node in "download-only-828038" cluster
	I0731 21:56:24.979406 1179794 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:56:25.034315 1179794 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 21:56:25.034357 1179794 cache.go:56] Caching tarball of preloaded images
	I0731 21:56:25.034521 1179794 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:56:25.036333 1179794 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0731 21:56:25.036354 1179794 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 21:56:25.060846 1179794 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0731 21:56:28.752506 1179794 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 21:56:28.752608 1179794 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0731 21:56:29.477981 1179794 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0731 21:56:29.478328 1179794 profile.go:143] Saving config to /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/download-only-828038/config.json ...
	I0731 21:56:29.478357 1179794 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/download-only-828038/config.json: {Name:mk492e37af477e8f61cc156d869eabf4a7149478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0731 21:56:29.478526 1179794 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0731 21:56:29.478655 1179794 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19312-1172186/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-828038 host does not exist
	  To start a cluster, run: "minikube start -p download-only-828038"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)
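The "Last Start" log above also shows how the preload cache is populated: the tarball is downloaded with an expected md5 (the ?checksum=md5:... query), the checksum is saved, and the tarball is only treated as cached once the digest has been verified against the downloaded file. A minimal sketch of that verify-before-use step, with an assumed verifyMD5 helper (minikube's real download and verification code is in the download.go and preload.go files named in the log, not reproduced here):

// Assumed helper showing the md5 verification step from the log above; the
// function name and error wording are illustrative, not minikube's code.
package sketch

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifyMD5(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		// A mismatch means the cached tarball cannot be trusted and must be re-downloaded.
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", path, got, wantHex)
	}
	return nil
}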

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-828038
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-164920 --alsologtostderr --binary-mirror http://127.0.0.1:37537 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-164920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-164920
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestOffline (95.05s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-329824 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-329824 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m34.021990825s)
helpers_test.go:175: Cleaning up "offline-crio-329824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-329824
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-329824: (1.031610296s)
--- PASS: TestOffline (95.05s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-801478
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-801478: exit status 85 (55.949221ms)

                                                
                                                
-- stdout --
	* Profile "addons-801478" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-801478"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-801478
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-801478: exit status 85 (55.268208ms)

                                                
                                                
-- stdout --
	* Profile "addons-801478" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-801478"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestCertOptions (75.03s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-555856 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0731 23:34:36.768574 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-555856 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m13.510055071s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-555856 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-555856 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-555856 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-555856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-555856
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-555856: (1.047939049s)
--- PASS: TestCertOptions (75.03s)

TestCertExpiration (255.04s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-676954 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-676954 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (44.532546215s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-676954 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-676954 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (29.456096296s)
helpers_test.go:175: Cleaning up "cert-expiration-676954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-676954
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-676954: (1.054751261s)
--- PASS: TestCertExpiration (255.04s)

TestForceSystemdFlag (71.56s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-351616 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-351616 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.327848798s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-351616 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-351616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-351616
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-351616: (1.028582823s)
--- PASS: TestForceSystemdFlag (71.56s)

TestForceSystemdEnv (47.56s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-081325 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-081325 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.530492127s)
helpers_test.go:175: Cleaning up "force-systemd-env-081325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-081325
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-081325: (1.031121355s)
--- PASS: TestForceSystemdEnv (47.56s)

TestKVMDriverInstallOrUpdate (3.77s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.77s)

TestErrorSpam/setup (38.68s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-670633 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-670633 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-670633 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-670633 --driver=kvm2  --container-runtime=crio: (38.677672523s)
--- PASS: TestErrorSpam/setup (38.68s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.76s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 status
--- PASS: TestErrorSpam/status (0.76s)

TestErrorSpam/pause (1.56s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 pause
--- PASS: TestErrorSpam/pause (1.56s)

TestErrorSpam/unpause (1.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

TestErrorSpam/stop (4.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 stop: (1.573722196s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 stop: (1.376716938s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-670633 --log_dir /tmp/nospam-670633 stop: (1.316209695s)
--- PASS: TestErrorSpam/stop (4.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19312-1172186/.minikube/files/etc/test/nested/copy/1179400/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (64.29s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-754682 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-754682 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m4.289893909s)
--- PASS: TestFunctional/serial/StartWithProxy (64.29s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.61s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-754682 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-754682 --alsologtostderr -v=8: (39.611745695s)
functional_test.go:663: soft start took 39.61280004s for "functional-754682" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.61s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-754682 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 cache add registry.k8s.io/pause:3.1: (1.109090503s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 cache add registry.k8s.io/pause:3.3: (1.286741675s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 cache add registry.k8s.io/pause:latest: (1.16758884s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

TestFunctional/serial/CacheCmd/cache/add_local (2.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-754682 /tmp/TestFunctionalserialCacheCmdcacheadd_local3692879737/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 cache add minikube-local-cache-test:functional-754682
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 cache add minikube-local-cache-test:functional-754682: (1.695647232s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 cache delete minikube-local-cache-test:functional-754682
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-754682
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.03s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-754682 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (221.60427ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 cache reload: (1.045191021s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.76s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 kubectl -- --context functional-754682 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-754682 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (31.79s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-754682 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-754682 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.790268917s)
functional_test.go:761: restart took 31.790417297s for "functional-754682" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.79s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-754682 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 logs: (1.356558234s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.38s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 logs --file /tmp/TestFunctionalserialLogsFileCmd412049632/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 logs --file /tmp/TestFunctionalserialLogsFileCmd412049632/001/logs.txt: (1.380129443s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

TestFunctional/serial/InvalidService (5.99s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-754682 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-754682
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-754682: exit status 115 (284.273563ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.54:32448 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-754682 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-754682 delete -f testdata/invalidsvc.yaml: (2.515702258s)
--- PASS: TestFunctional/serial/InvalidService (5.99s)

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-754682 config get cpus: exit status 14 (47.836454ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-754682 config get cpus: exit status 14 (46.711865ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

TestFunctional/parallel/DashboardCmd (11.12s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-754682 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-754682 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1193898: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.12s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-754682 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-754682 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (138.402012ms)

-- stdout --
	* [functional-754682] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0731 22:40:27.729373 1193617 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:40:27.729510 1193617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:40:27.729521 1193617 out.go:304] Setting ErrFile to fd 2...
	I0731 22:40:27.729526 1193617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:40:27.729701 1193617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:40:27.730235 1193617 out.go:298] Setting JSON to false
	I0731 22:40:27.731426 1193617 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":22979,"bootTime":1722442649,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 22:40:27.731495 1193617 start.go:139] virtualization: kvm guest
	I0731 22:40:27.733626 1193617 out.go:177] * [functional-754682] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0731 22:40:27.734964 1193617 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:40:27.734966 1193617 notify.go:220] Checking for updates...
	I0731 22:40:27.737234 1193617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:40:27.738566 1193617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:40:27.739872 1193617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:40:27.740962 1193617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 22:40:27.742317 1193617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:40:27.743918 1193617 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:40:27.744374 1193617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:40:27.744468 1193617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:40:27.760319 1193617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39063
	I0731 22:40:27.760857 1193617 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:40:27.761582 1193617 main.go:141] libmachine: Using API Version  1
	I0731 22:40:27.761613 1193617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:40:27.761973 1193617 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:40:27.762181 1193617 main.go:141] libmachine: (functional-754682) Calling .DriverName
	I0731 22:40:27.762457 1193617 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:40:27.762814 1193617 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:40:27.762858 1193617 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:40:27.778323 1193617 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34197
	I0731 22:40:27.778751 1193617 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:40:27.779220 1193617 main.go:141] libmachine: Using API Version  1
	I0731 22:40:27.779242 1193617 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:40:27.779519 1193617 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:40:27.779766 1193617 main.go:141] libmachine: (functional-754682) Calling .DriverName
	I0731 22:40:27.815163 1193617 out.go:177] * Using the kvm2 driver based on existing profile
	I0731 22:40:27.816312 1193617 start.go:297] selected driver: kvm2
	I0731 22:40:27.816327 1193617 start.go:901] validating driver "kvm2" against &{Name:functional-754682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-754682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:40:27.816474 1193617 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:40:27.818599 1193617 out.go:177] 
	W0731 22:40:27.819710 1193617 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0731 22:40:27.820952 1193617 out.go:177] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-754682 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-754682 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-754682 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.246293ms)

-- stdout --
	* [functional-754682] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0731 22:40:26.820803 1193500 out.go:291] Setting OutFile to fd 1 ...
	I0731 22:40:26.820929 1193500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:40:26.820938 1193500 out.go:304] Setting ErrFile to fd 2...
	I0731 22:40:26.820944 1193500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 22:40:26.821253 1193500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 22:40:26.821797 1193500 out.go:298] Setting JSON to false
	I0731 22:40:26.822937 1193500 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":22978,"bootTime":1722442649,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0731 22:40:26.823005 1193500 start.go:139] virtualization: kvm guest
	I0731 22:40:26.825181 1193500 out.go:177] * [functional-754682] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0731 22:40:26.826554 1193500 out.go:177]   - MINIKUBE_LOCATION=19312
	I0731 22:40:26.826582 1193500 notify.go:220] Checking for updates...
	I0731 22:40:26.828875 1193500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0731 22:40:26.829954 1193500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	I0731 22:40:26.831082 1193500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	I0731 22:40:26.832223 1193500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0731 22:40:26.833295 1193500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0731 22:40:26.834822 1193500 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 22:40:26.835264 1193500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:40:26.835359 1193500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:40:26.851335 1193500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0731 22:40:26.851794 1193500 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:40:26.852359 1193500 main.go:141] libmachine: Using API Version  1
	I0731 22:40:26.852380 1193500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:40:26.852800 1193500 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:40:26.852989 1193500 main.go:141] libmachine: (functional-754682) Calling .DriverName
	I0731 22:40:26.853265 1193500 driver.go:392] Setting default libvirt URI to qemu:///system
	I0731 22:40:26.853609 1193500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 22:40:26.853667 1193500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 22:40:26.869441 1193500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I0731 22:40:26.869848 1193500 main.go:141] libmachine: () Calling .GetVersion
	I0731 22:40:26.870323 1193500 main.go:141] libmachine: Using API Version  1
	I0731 22:40:26.870347 1193500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 22:40:26.870663 1193500 main.go:141] libmachine: () Calling .GetMachineName
	I0731 22:40:26.870849 1193500 main.go:141] libmachine: (functional-754682) Calling .DriverName
	I0731 22:40:26.906801 1193500 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0731 22:40:26.908208 1193500 start.go:297] selected driver: kvm2
	I0731 22:40:26.908232 1193500 start.go:901] validating driver "kvm2" against &{Name:functional-754682 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-754682 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.54 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0731 22:40:26.908406 1193500 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0731 22:40:26.910982 1193500 out.go:177] 
	W0731 22:40:26.912249 1193500 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0731 22:40:26.913362 1193500 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.77s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.77s)

TestFunctional/parallel/ServiceCmdConnect (15.07s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-754682 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-754682 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-cn9b8" [0c85fbd3-46c2-4b04-8770-41591d75702d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-cn9b8" [0c85fbd3-46c2-4b04-8770-41591d75702d] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.004221457s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.54:31612
functional_test.go:1675: http://192.168.39.54:31612: success! body:

Hostname: hello-node-connect-57b4589c47-cn9b8

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.54:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.54:31612
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (15.07s)

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (38.48s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [5042bf9b-c708-48aa-8865-e02ae677e5c8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01696726s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-754682 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-754682 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-754682 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-754682 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d3510ea9-6cf3-4495-8cf6-c62f940afbe5] Pending
helpers_test.go:344: "sp-pod" [d3510ea9-6cf3-4495-8cf6-c62f940afbe5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d3510ea9-6cf3-4495-8cf6-c62f940afbe5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.003460078s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-754682 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-754682 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-754682 delete -f testdata/storage-provisioner/pod.yaml: (1.321409159s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-754682 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9f51e5d4-d780-4330-8c57-3dcac259ca03] Pending
helpers_test.go:344: "sp-pod" [9f51e5d4-d780-4330-8c57-3dcac259ca03] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9f51e5d4-d780-4330-8c57-3dcac259ca03] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004033197s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-754682 exec sp-pod -- ls /tmp/mount
2024/07/31 22:40:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.48s)

TestFunctional/parallel/SSHCmd (0.43s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.43s)

TestFunctional/parallel/CpCmd (1.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh -n functional-754682 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 cp functional-754682:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2717410892/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh -n functional-754682 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh -n functional-754682 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)

TestFunctional/parallel/MySQL (20.41s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-754682 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-dz6m4" [8ada864a-e073-41e5-92d0-1de0cc7579aa] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-dz6m4" [8ada864a-e073-41e5-92d0-1de0cc7579aa] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.011127869s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-754682 exec mysql-64454c8b5c-dz6m4 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.41s)
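The MySQL check boils down to applying the bundled manifest and querying the database once the pod is Ready; a sketch, assuming the manifest creates a Deployment named mysql (the pod name suffix differs on every rollout):

  kubectl --context functional-754682 replace --force -f testdata/mysql.yaml
  kubectl --context functional-754682 wait --for=condition=ready pod -l app=mysql --timeout=10m
  kubectl --context functional-754682 exec deploy/mysql -- mysql -ppassword -e "show databases;"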

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1179400/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "sudo cat /etc/test/nested/copy/1179400/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
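FileSync relies on minikube copying files placed under the host's MINIKUBE_HOME files/ directory into the VM at the matching path (the 1179400 path component appears to be the test process ID); the spot check is a plain read over ssh:

  out/minikube-linux-amd64 -p functional-754682 ssh "sudo cat /etc/test/nested/copy/1179400/hosts"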

                                                
                                    
TestFunctional/parallel/CertSync (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1179400.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "sudo cat /etc/ssl/certs/1179400.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1179400.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "sudo cat /usr/share/ca-certificates/1179400.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/11794002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "sudo cat /etc/ssl/certs/11794002.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/11794002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "sudo cat /usr/share/ca-certificates/11794002.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.33s)
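CertSync follows the same pattern: each assertion is a sudo cat over ssh against the path where the synced host certificate is expected to appear, e.g.:

  out/minikube-linux-amd64 -p functional-754682 ssh "sudo cat /etc/ssl/certs/1179400.pem"
  out/minikube-linux-amd64 -p functional-754682 ssh "sudo cat /usr/share/ca-certificates/1179400.pem"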

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-754682 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
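The go-template above prints every label key on the first node; an equivalent manual check, using either the same template or kubectl's built-in flag, is:

  kubectl --context functional-754682 get nodes --output=go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
  # or, more simply
  kubectl --context functional-754682 get nodes --show-labels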

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-754682 ssh "sudo systemctl is-active docker": exit status 1 (202.721975ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-754682 ssh "sudo systemctl is-active containerd": exit status 1 (204.912624ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
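With crio selected as the container runtime, docker and containerd are expected to be inactive inside the VM; systemctl is-active exits with status 3 for an inactive unit, which is why the non-zero exits above still count as a pass. A manual sketch:

  out/minikube-linux-amd64 -p functional-754682 ssh "sudo systemctl is-active docker"      # expect: inactive
  out/minikube-linux-amd64 -p functional-754682 ssh "sudo systemctl is-active containerd"  # expect: inactive
  out/minikube-linux-amd64 -p functional-754682 ssh "sudo systemctl is-active crio"        # expect: active (not asserted by this test)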

                                                
                                    
TestFunctional/parallel/License (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-754682 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-754682
localhost/kicbase/echo-server:functional-754682
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-754682 image ls --format short --alsologtostderr:
I0731 22:40:29.375287 1193858 out.go:291] Setting OutFile to fd 1 ...
I0731 22:40:29.375403 1193858 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:40:29.375411 1193858 out.go:304] Setting ErrFile to fd 2...
I0731 22:40:29.375415 1193858 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:40:29.375590 1193858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
I0731 22:40:29.376201 1193858 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:40:29.376307 1193858 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:40:29.376758 1193858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 22:40:29.376805 1193858 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 22:40:29.393344 1193858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42825
I0731 22:40:29.393950 1193858 main.go:141] libmachine: () Calling .GetVersion
I0731 22:40:29.394564 1193858 main.go:141] libmachine: Using API Version  1
I0731 22:40:29.394584 1193858 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 22:40:29.394956 1193858 main.go:141] libmachine: () Calling .GetMachineName
I0731 22:40:29.395211 1193858 main.go:141] libmachine: (functional-754682) Calling .GetState
I0731 22:40:29.397477 1193858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 22:40:29.397536 1193858 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 22:40:29.414073 1193858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
I0731 22:40:29.414586 1193858 main.go:141] libmachine: () Calling .GetVersion
I0731 22:40:29.415201 1193858 main.go:141] libmachine: Using API Version  1
I0731 22:40:29.415246 1193858 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 22:40:29.415624 1193858 main.go:141] libmachine: () Calling .GetMachineName
I0731 22:40:29.415920 1193858 main.go:141] libmachine: (functional-754682) Calling .DriverName
I0731 22:40:29.416179 1193858 ssh_runner.go:195] Run: systemctl --version
I0731 22:40:29.416213 1193858 main.go:141] libmachine: (functional-754682) Calling .GetSSHHostname
I0731 22:40:29.419583 1193858 main.go:141] libmachine: (functional-754682) DBG | domain functional-754682 has defined MAC address 52:54:00:57:db:c2 in network mk-functional-754682
I0731 22:40:29.420060 1193858 main.go:141] libmachine: (functional-754682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:db:c2", ip: ""} in network mk-functional-754682: {Iface:virbr1 ExpiryTime:2024-07-31 23:37:34 +0000 UTC Type:0 Mac:52:54:00:57:db:c2 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-754682 Clientid:01:52:54:00:57:db:c2}
I0731 22:40:29.420111 1193858 main.go:141] libmachine: (functional-754682) DBG | domain functional-754682 has defined IP address 192.168.39.54 and MAC address 52:54:00:57:db:c2 in network mk-functional-754682
I0731 22:40:29.420311 1193858 main.go:141] libmachine: (functional-754682) Calling .GetSSHPort
I0731 22:40:29.420508 1193858 main.go:141] libmachine: (functional-754682) Calling .GetSSHKeyPath
I0731 22:40:29.420708 1193858 main.go:141] libmachine: (functional-754682) Calling .GetSSHUsername
I0731 22:40:29.420876 1193858 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/functional-754682/id_rsa Username:docker}
I0731 22:40:29.522576 1193858 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 22:40:29.582814 1193858 main.go:141] libmachine: Making call to close driver server
I0731 22:40:29.582833 1193858 main.go:141] libmachine: (functional-754682) Calling .Close
I0731 22:40:29.583141 1193858 main.go:141] libmachine: Successfully made call to close driver server
I0731 22:40:29.583161 1193858 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 22:40:29.583173 1193858 main.go:141] libmachine: Making call to close driver server
I0731 22:40:29.583174 1193858 main.go:141] libmachine: (functional-754682) DBG | Closing plugin on server side
I0731 22:40:29.583180 1193858 main.go:141] libmachine: (functional-754682) Calling .Close
I0731 22:40:29.583404 1193858 main.go:141] libmachine: Successfully made call to close driver server
I0731 22:40:29.583420 1193858 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 22:40:29.583435 1193858 main.go:141] libmachine: (functional-754682) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
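The ImageCommands/ImageList* tests that follow all run the same listing through minikube image ls and differ only in the --format value; a quick sketch of the four variants:

  out/minikube-linux-amd64 -p functional-754682 image ls --format short
  out/minikube-linux-amd64 -p functional-754682 image ls --format table
  out/minikube-linux-amd64 -p functional-754682 image ls --format json
  out/minikube-linux-amd64 -p functional-754682 image ls --format yaml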

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
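update-context rewrites the profile's kubeconfig entry so that kubectl points at the cluster's current IP and port; the three UpdateContextCmd variants invoke the identical command and differ only in the starting state of the kubeconfig:

  out/minikube-linux-amd64 -p functional-754682 update-context --alsologtostderr -v=2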

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-754682 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | alpine             | 1ae23480369fa | 45.1MB |
| localhost/kicbase/echo-server           | functional-754682  | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-754682  | 73444150ae83b | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-754682 image ls --format table --alsologtostderr:
I0731 22:40:30.153061 1193980 out.go:291] Setting OutFile to fd 1 ...
I0731 22:40:30.153208 1193980 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:40:30.153232 1193980 out.go:304] Setting ErrFile to fd 2...
I0731 22:40:30.153245 1193980 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:40:30.153453 1193980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
I0731 22:40:30.154066 1193980 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:40:30.154192 1193980 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:40:30.154648 1193980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 22:40:30.154707 1193980 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 22:40:30.170667 1193980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41699
I0731 22:40:30.171180 1193980 main.go:141] libmachine: () Calling .GetVersion
I0731 22:40:30.171836 1193980 main.go:141] libmachine: Using API Version  1
I0731 22:40:30.171862 1193980 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 22:40:30.172288 1193980 main.go:141] libmachine: () Calling .GetMachineName
I0731 22:40:30.172530 1193980 main.go:141] libmachine: (functional-754682) Calling .GetState
I0731 22:40:30.174491 1193980 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 22:40:30.174538 1193980 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 22:40:30.190087 1193980 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33227
I0731 22:40:30.190618 1193980 main.go:141] libmachine: () Calling .GetVersion
I0731 22:40:30.191089 1193980 main.go:141] libmachine: Using API Version  1
I0731 22:40:30.191115 1193980 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 22:40:30.191502 1193980 main.go:141] libmachine: () Calling .GetMachineName
I0731 22:40:30.191789 1193980 main.go:141] libmachine: (functional-754682) Calling .DriverName
I0731 22:40:30.192017 1193980 ssh_runner.go:195] Run: systemctl --version
I0731 22:40:30.192044 1193980 main.go:141] libmachine: (functional-754682) Calling .GetSSHHostname
I0731 22:40:30.195027 1193980 main.go:141] libmachine: (functional-754682) DBG | domain functional-754682 has defined MAC address 52:54:00:57:db:c2 in network mk-functional-754682
I0731 22:40:30.195456 1193980 main.go:141] libmachine: (functional-754682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:db:c2", ip: ""} in network mk-functional-754682: {Iface:virbr1 ExpiryTime:2024-07-31 23:37:34 +0000 UTC Type:0 Mac:52:54:00:57:db:c2 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-754682 Clientid:01:52:54:00:57:db:c2}
I0731 22:40:30.195492 1193980 main.go:141] libmachine: (functional-754682) DBG | domain functional-754682 has defined IP address 192.168.39.54 and MAC address 52:54:00:57:db:c2 in network mk-functional-754682
I0731 22:40:30.195622 1193980 main.go:141] libmachine: (functional-754682) Calling .GetSSHPort
I0731 22:40:30.195817 1193980 main.go:141] libmachine: (functional-754682) Calling .GetSSHKeyPath
I0731 22:40:30.195965 1193980 main.go:141] libmachine: (functional-754682) Calling .GetSSHUsername
I0731 22:40:30.196135 1193980 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/functional-754682/id_rsa Username:docker}
I0731 22:40:30.278705 1193980 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 22:40:30.317865 1193980 main.go:141] libmachine: Making call to close driver server
I0731 22:40:30.317882 1193980 main.go:141] libmachine: (functional-754682) Calling .Close
I0731 22:40:30.318213 1193980 main.go:141] libmachine: Successfully made call to close driver server
I0731 22:40:30.318292 1193980 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 22:40:30.318314 1193980 main.go:141] libmachine: Making call to close driver server
I0731 22:40:30.318323 1193980 main.go:141] libmachine: (functional-754682) Calling .Close
I0731 22:40:30.318253 1193980 main.go:141] libmachine: (functional-754682) DBG | Closing plugin on server side
I0731 22:40:30.318638 1193980 main.go:141] libmachine: Successfully made call to close driver server
I0731 22:40:30.318662 1193980 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 22:40:30.318679 1193980 main.go:141] libmachine: (functional-754682) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-754682 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"73444150ae83b8ac0ea6d6bfd9ec9a15e8f9fa359aa179d15792c5fb12e
d9299","repoDigests":["localhost/minikube-local-cache-test@sha256:01ed04051b8be2c7ce2c8ebd7b13a57e12348dc00d43d1440788e80e8567b02e"],"repoTags":["localhost/minikube-local-cache-test:functional-754682"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f55
11bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["r
egistry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"63051080"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"1f6d574d502f3b61
c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9"
,"docker.io/library/nginx@sha256:a377278b7dde3a8012b25d141d025a88dbf9f5ed13c5cdf21ee241e7ec07ab57"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45068794"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-754682"],"size":"4943877"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registr
y.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-754682 image ls --format json --alsologtostderr:
I0731 22:40:29.902026 1193937 out.go:291] Setting OutFile to fd 1 ...
I0731 22:40:29.902126 1193937 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:40:29.902130 1193937 out.go:304] Setting ErrFile to fd 2...
I0731 22:40:29.902134 1193937 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:40:29.902332 1193937 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
I0731 22:40:29.902881 1193937 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:40:29.902982 1193937 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:40:29.903338 1193937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 22:40:29.903380 1193937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 22:40:29.919894 1193937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46859
I0731 22:40:29.920470 1193937 main.go:141] libmachine: () Calling .GetVersion
I0731 22:40:29.921263 1193937 main.go:141] libmachine: Using API Version  1
I0731 22:40:29.921292 1193937 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 22:40:29.921930 1193937 main.go:141] libmachine: () Calling .GetMachineName
I0731 22:40:29.922138 1193937 main.go:141] libmachine: (functional-754682) Calling .GetState
I0731 22:40:29.924487 1193937 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 22:40:29.924537 1193937 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 22:40:29.940497 1193937 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37531
I0731 22:40:29.941000 1193937 main.go:141] libmachine: () Calling .GetVersion
I0731 22:40:29.941573 1193937 main.go:141] libmachine: Using API Version  1
I0731 22:40:29.941606 1193937 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 22:40:29.941909 1193937 main.go:141] libmachine: () Calling .GetMachineName
I0731 22:40:29.942107 1193937 main.go:141] libmachine: (functional-754682) Calling .DriverName
I0731 22:40:29.942300 1193937 ssh_runner.go:195] Run: systemctl --version
I0731 22:40:29.942323 1193937 main.go:141] libmachine: (functional-754682) Calling .GetSSHHostname
I0731 22:40:29.945605 1193937 main.go:141] libmachine: (functional-754682) DBG | domain functional-754682 has defined MAC address 52:54:00:57:db:c2 in network mk-functional-754682
I0731 22:40:29.945977 1193937 main.go:141] libmachine: (functional-754682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:db:c2", ip: ""} in network mk-functional-754682: {Iface:virbr1 ExpiryTime:2024-07-31 23:37:34 +0000 UTC Type:0 Mac:52:54:00:57:db:c2 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-754682 Clientid:01:52:54:00:57:db:c2}
I0731 22:40:29.946014 1193937 main.go:141] libmachine: (functional-754682) DBG | domain functional-754682 has defined IP address 192.168.39.54 and MAC address 52:54:00:57:db:c2 in network mk-functional-754682
I0731 22:40:29.946223 1193937 main.go:141] libmachine: (functional-754682) Calling .GetSSHPort
I0731 22:40:29.946411 1193937 main.go:141] libmachine: (functional-754682) Calling .GetSSHKeyPath
I0731 22:40:29.946564 1193937 main.go:141] libmachine: (functional-754682) Calling .GetSSHUsername
I0731 22:40:29.946682 1193937 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/functional-754682/id_rsa Username:docker}
I0731 22:40:30.039198 1193937 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 22:40:30.097890 1193937 main.go:141] libmachine: Making call to close driver server
I0731 22:40:30.097907 1193937 main.go:141] libmachine: (functional-754682) Calling .Close
I0731 22:40:30.098231 1193937 main.go:141] libmachine: Successfully made call to close driver server
I0731 22:40:30.098248 1193937 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 22:40:30.098277 1193937 main.go:141] libmachine: Making call to close driver server
I0731 22:40:30.098281 1193937 main.go:141] libmachine: (functional-754682) DBG | Closing plugin on server side
I0731 22:40:30.098293 1193937 main.go:141] libmachine: (functional-754682) Calling .Close
I0731 22:40:30.098556 1193937 main.go:141] libmachine: (functional-754682) DBG | Closing plugin on server side
I0731 22:40:30.098597 1193937 main.go:141] libmachine: Successfully made call to close driver server
I0731 22:40:30.098618 1193937 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-754682 image ls --format yaml --alsologtostderr:
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-754682
size: "4943877"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
- docker.io/library/nginx@sha256:a377278b7dde3a8012b25d141d025a88dbf9f5ed13c5cdf21ee241e7ec07ab57
repoTags:
- docker.io/library/nginx:alpine
size: "45068794"
- id: 73444150ae83b8ac0ea6d6bfd9ec9a15e8f9fa359aa179d15792c5fb12ed9299
repoDigests:
- localhost/minikube-local-cache-test@sha256:01ed04051b8be2c7ce2c8ebd7b13a57e12348dc00d43d1440788e80e8567b02e
repoTags:
- localhost/minikube-local-cache-test:functional-754682
size: "3330"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-754682 image ls --format yaml --alsologtostderr:
I0731 22:40:29.638562 1193905 out.go:291] Setting OutFile to fd 1 ...
I0731 22:40:29.638689 1193905 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:40:29.638699 1193905 out.go:304] Setting ErrFile to fd 2...
I0731 22:40:29.638706 1193905 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:40:29.638925 1193905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
I0731 22:40:29.639606 1193905 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:40:29.639744 1193905 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:40:29.640170 1193905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 22:40:29.640232 1193905 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 22:40:29.657391 1193905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36475
I0731 22:40:29.657884 1193905 main.go:141] libmachine: () Calling .GetVersion
I0731 22:40:29.658517 1193905 main.go:141] libmachine: Using API Version  1
I0731 22:40:29.658542 1193905 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 22:40:29.658927 1193905 main.go:141] libmachine: () Calling .GetMachineName
I0731 22:40:29.659159 1193905 main.go:141] libmachine: (functional-754682) Calling .GetState
I0731 22:40:29.661047 1193905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 22:40:29.661093 1193905 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 22:40:29.677368 1193905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36185
I0731 22:40:29.677882 1193905 main.go:141] libmachine: () Calling .GetVersion
I0731 22:40:29.678392 1193905 main.go:141] libmachine: Using API Version  1
I0731 22:40:29.678415 1193905 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 22:40:29.678752 1193905 main.go:141] libmachine: () Calling .GetMachineName
I0731 22:40:29.678955 1193905 main.go:141] libmachine: (functional-754682) Calling .DriverName
I0731 22:40:29.679165 1193905 ssh_runner.go:195] Run: systemctl --version
I0731 22:40:29.679191 1193905 main.go:141] libmachine: (functional-754682) Calling .GetSSHHostname
I0731 22:40:29.682241 1193905 main.go:141] libmachine: (functional-754682) DBG | domain functional-754682 has defined MAC address 52:54:00:57:db:c2 in network mk-functional-754682
I0731 22:40:29.682731 1193905 main.go:141] libmachine: (functional-754682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:db:c2", ip: ""} in network mk-functional-754682: {Iface:virbr1 ExpiryTime:2024-07-31 23:37:34 +0000 UTC Type:0 Mac:52:54:00:57:db:c2 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-754682 Clientid:01:52:54:00:57:db:c2}
I0731 22:40:29.682767 1193905 main.go:141] libmachine: (functional-754682) DBG | domain functional-754682 has defined IP address 192.168.39.54 and MAC address 52:54:00:57:db:c2 in network mk-functional-754682
I0731 22:40:29.683022 1193905 main.go:141] libmachine: (functional-754682) Calling .GetSSHPort
I0731 22:40:29.683170 1193905 main.go:141] libmachine: (functional-754682) Calling .GetSSHKeyPath
I0731 22:40:29.683302 1193905 main.go:141] libmachine: (functional-754682) Calling .GetSSHUsername
I0731 22:40:29.683438 1193905 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/functional-754682/id_rsa Username:docker}
I0731 22:40:29.795644 1193905 ssh_runner.go:195] Run: sudo crictl images --output json
I0731 22:40:29.846646 1193905 main.go:141] libmachine: Making call to close driver server
I0731 22:40:29.846665 1193905 main.go:141] libmachine: (functional-754682) Calling .Close
I0731 22:40:29.846983 1193905 main.go:141] libmachine: Successfully made call to close driver server
I0731 22:40:29.847039 1193905 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 22:40:29.847062 1193905 main.go:141] libmachine: Making call to close driver server
I0731 22:40:29.847158 1193905 main.go:141] libmachine: (functional-754682) Calling .Close
I0731 22:40:29.847400 1193905 main.go:141] libmachine: Successfully made call to close driver server
I0731 22:40:29.847416 1193905 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-754682 ssh pgrep buildkitd: exit status 1 (195.345231ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image build -t localhost/my-image:functional-754682 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 image build -t localhost/my-image:functional-754682 testdata/build --alsologtostderr: (2.663630055s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-754682 image build -t localhost/my-image:functional-754682 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 66c3c89d49a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-754682
--> b3e1d92d356
Successfully tagged localhost/my-image:functional-754682
b3e1d92d356ca2612b5efa75d6882e7613ab36bdb32fde6979a720c5101d4f52
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-754682 image build -t localhost/my-image:functional-754682 testdata/build --alsologtostderr:
I0731 22:40:30.565539 1194038 out.go:291] Setting OutFile to fd 1 ...
I0731 22:40:30.565685 1194038 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:40:30.565693 1194038 out.go:304] Setting ErrFile to fd 2...
I0731 22:40:30.565697 1194038 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0731 22:40:30.566252 1194038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
I0731 22:40:30.567607 1194038 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:40:30.568218 1194038 config.go:182] Loaded profile config "functional-754682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0731 22:40:30.568592 1194038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 22:40:30.568645 1194038 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 22:40:30.584256 1194038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37987
I0731 22:40:30.584773 1194038 main.go:141] libmachine: () Calling .GetVersion
I0731 22:40:30.585353 1194038 main.go:141] libmachine: Using API Version  1
I0731 22:40:30.585384 1194038 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 22:40:30.585742 1194038 main.go:141] libmachine: () Calling .GetMachineName
I0731 22:40:30.585962 1194038 main.go:141] libmachine: (functional-754682) Calling .GetState
I0731 22:40:30.587965 1194038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0731 22:40:30.588018 1194038 main.go:141] libmachine: Launching plugin server for driver kvm2
I0731 22:40:30.604753 1194038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34473
I0731 22:40:30.605251 1194038 main.go:141] libmachine: () Calling .GetVersion
I0731 22:40:30.605831 1194038 main.go:141] libmachine: Using API Version  1
I0731 22:40:30.605863 1194038 main.go:141] libmachine: () Calling .SetConfigRaw
I0731 22:40:30.606184 1194038 main.go:141] libmachine: () Calling .GetMachineName
I0731 22:40:30.606408 1194038 main.go:141] libmachine: (functional-754682) Calling .DriverName
I0731 22:40:30.606676 1194038 ssh_runner.go:195] Run: systemctl --version
I0731 22:40:30.606713 1194038 main.go:141] libmachine: (functional-754682) Calling .GetSSHHostname
I0731 22:40:30.609262 1194038 main.go:141] libmachine: (functional-754682) DBG | domain functional-754682 has defined MAC address 52:54:00:57:db:c2 in network mk-functional-754682
I0731 22:40:30.609729 1194038 main.go:141] libmachine: (functional-754682) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:db:c2", ip: ""} in network mk-functional-754682: {Iface:virbr1 ExpiryTime:2024-07-31 23:37:34 +0000 UTC Type:0 Mac:52:54:00:57:db:c2 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:functional-754682 Clientid:01:52:54:00:57:db:c2}
I0731 22:40:30.609762 1194038 main.go:141] libmachine: (functional-754682) DBG | domain functional-754682 has defined IP address 192.168.39.54 and MAC address 52:54:00:57:db:c2 in network mk-functional-754682
I0731 22:40:30.609892 1194038 main.go:141] libmachine: (functional-754682) Calling .GetSSHPort
I0731 22:40:30.610077 1194038 main.go:141] libmachine: (functional-754682) Calling .GetSSHKeyPath
I0731 22:40:30.610239 1194038 main.go:141] libmachine: (functional-754682) Calling .GetSSHUsername
I0731 22:40:30.610447 1194038 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/functional-754682/id_rsa Username:docker}
I0731 22:40:30.694327 1194038 build_images.go:161] Building image from path: /tmp/build.3026287136.tar
I0731 22:40:30.694403 1194038 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0731 22:40:30.705072 1194038 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3026287136.tar
I0731 22:40:30.709869 1194038 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3026287136.tar: stat -c "%s %y" /var/lib/minikube/build/build.3026287136.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3026287136.tar': No such file or directory
I0731 22:40:30.709913 1194038 ssh_runner.go:362] scp /tmp/build.3026287136.tar --> /var/lib/minikube/build/build.3026287136.tar (3072 bytes)
I0731 22:40:30.740877 1194038 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3026287136
I0731 22:40:30.751382 1194038 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3026287136 -xf /var/lib/minikube/build/build.3026287136.tar
I0731 22:40:30.761431 1194038 crio.go:315] Building image: /var/lib/minikube/build/build.3026287136
I0731 22:40:30.761577 1194038 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-754682 /var/lib/minikube/build/build.3026287136 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0731 22:40:33.144464 1194038 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-754682 /var/lib/minikube/build/build.3026287136 --cgroup-manager=cgroupfs: (2.382849736s)
I0731 22:40:33.144536 1194038 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3026287136
I0731 22:40:33.163505 1194038 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3026287136.tar
I0731 22:40:33.175822 1194038 build_images.go:217] Built localhost/my-image:functional-754682 from /tmp/build.3026287136.tar
I0731 22:40:33.175866 1194038 build_images.go:133] succeeded building to: functional-754682
I0731 22:40:33.175873 1194038 build_images.go:134] failed building to: 
I0731 22:40:33.175903 1194038 main.go:141] libmachine: Making call to close driver server
I0731 22:40:33.175920 1194038 main.go:141] libmachine: (functional-754682) Calling .Close
I0731 22:40:33.176223 1194038 main.go:141] libmachine: Successfully made call to close driver server
I0731 22:40:33.176245 1194038 main.go:141] libmachine: Making call to close connection to plugin binary
I0731 22:40:33.176257 1194038 main.go:141] libmachine: Making call to close driver server
I0731 22:40:33.176265 1194038 main.go:141] libmachine: (functional-754682) Calling .Close
I0731 22:40:33.176553 1194038 main.go:141] libmachine: Successfully made call to close driver server
I0731 22:40:33.176572 1194038 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.19s)

TestFunctional/parallel/ImageCommands/Setup (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.498094947s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-754682
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.52s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image load --daemon kicbase/echo-server:functional-754682 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 image load --daemon kicbase/echo-server:functional-754682 --alsologtostderr: (1.233248988s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.50s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-754682 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-754682 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-754682 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1192006: os: process already finished
helpers_test.go:502: unable to terminate pid 1192018: os: process already finished
helpers_test.go:502: unable to terminate pid 1192030: os: process already finished
helpers_test.go:508: unable to kill pid 1191984: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-754682 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-754682 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (21.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-754682 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4aa4ba9d-059c-4ebb-84e2-7e279c73bf95] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4aa4ba9d-059c-4ebb-84e2-7e279c73bf95] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 21.004311793s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (21.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image load --daemon kicbase/echo-server:functional-754682 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-754682
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image load --daemon kicbase/echo-server:functional-754682 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 image load --daemon kicbase/echo-server:functional-754682 --alsologtostderr: (3.107614736s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image save kicbase/echo-server:functional-754682 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 image save kicbase/echo-server:functional-754682 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.320403063s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image rm kicbase/echo-server:functional-754682 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-754682
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 image save --daemon kicbase/echo-server:functional-754682 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 image save --daemon kicbase/echo-server:functional-754682 --alsologtostderr: (1.134312447s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-754682
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.17s)

TestFunctional/parallel/MountCmd/any-port (13.49s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-754682 /tmp/TestFunctionalparallelMountCmdany-port1749072076/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722465608672751864" to /tmp/TestFunctionalparallelMountCmdany-port1749072076/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722465608672751864" to /tmp/TestFunctionalparallelMountCmdany-port1749072076/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722465608672751864" to /tmp/TestFunctionalparallelMountCmdany-port1749072076/001/test-1722465608672751864
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-754682 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (221.223308ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 31 22:40 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 31 22:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 31 22:40 test-1722465608672751864
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh cat /mount-9p/test-1722465608672751864
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-754682 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7523dbea-05e8-4a74-8619-7ac88e5ad533] Pending
helpers_test.go:344: "busybox-mount" [7523dbea-05e8-4a74-8619-7ac88e5ad533] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7523dbea-05e8-4a74-8619-7ac88e5ad533] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7523dbea-05e8-4a74-8619-7ac88e5ad533] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.00425389s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-754682 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-754682 /tmp/TestFunctionalparallelMountCmdany-port1749072076/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-754682 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.151.66 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-754682 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-754682 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-754682 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-nmpwp" [8a9d5af7-83cb-49f3-8a8b-070b9974551d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-nmpwp" [8a9d5af7-83cb-49f3-8a8b-070b9974551d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003804658s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.15s)

TestFunctional/parallel/MountCmd/specific-port (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-754682 /tmp/TestFunctionalparallelMountCmdspecific-port2003885940/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-754682 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (192.576898ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-754682 /tmp/TestFunctionalparallelMountCmdspecific-port2003885940/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-754682 ssh "sudo umount -f /mount-9p": exit status 1 (193.060757ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-754682 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-754682 /tmp/TestFunctionalparallelMountCmdspecific-port2003885940/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.66s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-754682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2035100548/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-754682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2035100548/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-754682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2035100548/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-754682 ssh "findmnt -T" /mount1: exit status 1 (255.376139ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-754682 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-754682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2035100548/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-754682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2035100548/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-754682 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2035100548/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.45s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.29s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "258.636338ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "49.074398ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "227.29992ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "47.323271ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.66s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

TestFunctional/parallel/ServiceCmd/List (0.88s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.88s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-754682 service list -o json: (1.693079296s)
functional_test.go:1494: Took "1.693208808s" to run "out/minikube-linux-amd64 -p functional-754682 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.54:30347
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.54s)

TestFunctional/parallel/ServiceCmd/URL (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-754682 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.54:30347
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-754682
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-754682
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-754682
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (209.68s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-150891 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-150891 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m29.027640143s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (209.68s)

TestMultiControlPlane/serial/DeployApp (6.64s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-150891 -- rollout status deployment/busybox: (4.44364348s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-98526 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-cwsjc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-gzb99 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-98526 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-cwsjc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-gzb99 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-98526 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-cwsjc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-gzb99 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.64s)

TestMultiControlPlane/serial/PingHostFromPods (1.2s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-98526 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-98526 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-cwsjc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-cwsjc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-gzb99 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-150891 -- exec busybox-fc5497c4f-gzb99 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

TestMultiControlPlane/serial/AddWorkerNode (54.17s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-150891 -v=7 --alsologtostderr
E0731 22:44:53.720002 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:44:53.725999 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:44:53.736390 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:44:53.756719 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:44:53.797125 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:44:53.877514 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:44:54.038177 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:44:54.358833 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:44:54.999116 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:44:56.279431 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:44:58.840352 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 22:45:03.960911 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-150891 -v=7 --alsologtostderr: (53.319503096s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.17s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-150891 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

TestMultiControlPlane/serial/CopyFile (13.08s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp testdata/cp-test.txt ha-150891:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3873107821/001/cp-test_ha-150891.txt
E0731 22:45:14.201998 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891:/home/docker/cp-test.txt ha-150891-m02:/home/docker/cp-test_ha-150891_ha-150891-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m02 "sudo cat /home/docker/cp-test_ha-150891_ha-150891-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891:/home/docker/cp-test.txt ha-150891-m03:/home/docker/cp-test_ha-150891_ha-150891-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m03 "sudo cat /home/docker/cp-test_ha-150891_ha-150891-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891:/home/docker/cp-test.txt ha-150891-m04:/home/docker/cp-test_ha-150891_ha-150891-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m04 "sudo cat /home/docker/cp-test_ha-150891_ha-150891-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp testdata/cp-test.txt ha-150891-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3873107821/001/cp-test_ha-150891-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m02:/home/docker/cp-test.txt ha-150891:/home/docker/cp-test_ha-150891-m02_ha-150891.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891 "sudo cat /home/docker/cp-test_ha-150891-m02_ha-150891.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m02:/home/docker/cp-test.txt ha-150891-m03:/home/docker/cp-test_ha-150891-m02_ha-150891-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m03 "sudo cat /home/docker/cp-test_ha-150891-m02_ha-150891-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m02:/home/docker/cp-test.txt ha-150891-m04:/home/docker/cp-test_ha-150891-m02_ha-150891-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m04 "sudo cat /home/docker/cp-test_ha-150891-m02_ha-150891-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp testdata/cp-test.txt ha-150891-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3873107821/001/cp-test_ha-150891-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt ha-150891:/home/docker/cp-test_ha-150891-m03_ha-150891.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891 "sudo cat /home/docker/cp-test_ha-150891-m03_ha-150891.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt ha-150891-m02:/home/docker/cp-test_ha-150891-m03_ha-150891-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m02 "sudo cat /home/docker/cp-test_ha-150891-m03_ha-150891-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m03:/home/docker/cp-test.txt ha-150891-m04:/home/docker/cp-test_ha-150891-m03_ha-150891-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m04 "sudo cat /home/docker/cp-test_ha-150891-m03_ha-150891-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp testdata/cp-test.txt ha-150891-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3873107821/001/cp-test_ha-150891-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt ha-150891:/home/docker/cp-test_ha-150891-m04_ha-150891.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891 "sudo cat /home/docker/cp-test_ha-150891-m04_ha-150891.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt ha-150891-m02:/home/docker/cp-test_ha-150891-m04_ha-150891-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m02 "sudo cat /home/docker/cp-test_ha-150891-m04_ha-150891-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 cp ha-150891-m04:/home/docker/cp-test.txt ha-150891-m03:/home/docker/cp-test_ha-150891-m04_ha-150891-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 ssh -n ha-150891-m03 "sudo cat /home/docker/cp-test_ha-150891-m04_ha-150891-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.08s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.480170898s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.18s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-150891 node delete m03 -v=7 --alsologtostderr: (16.418788155s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.18s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.39s)

TestMultiControlPlane/serial/RestartCluster (354.25s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-150891 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 22:59:53.722341 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
E0731 23:01:16.767296 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-150891 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m53.473898474s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (354.25s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

TestMultiControlPlane/serial/AddSecondaryNode (76.2s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-150891 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-150891 --control-plane -v=7 --alsologtostderr: (1m15.333513767s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-150891 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.57s)

TestJSONOutput/start/Command (57.52s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-255853 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-255853 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (57.522791611s)
--- PASS: TestJSONOutput/start/Command (57.52s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-255853 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.64s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-255853 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.38s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-255853 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-255853 --output=json --user=testUser: (7.376092426s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
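
The Audit and parallel step subtests above validate the events recorded while pause, unpause, and stop ran with --output=json. The real assertions live in json_output_test.go; the following is only a minimal standalone sketch in Go, with made-up input values, of the two properties being checked: every "currentstep" value appears once, and the sequence never decreases.

	package main

	import (
		"fmt"
		"strconv"
	)

	// checkSteps verifies the two properties exercised above for a list of
	// "currentstep" values taken from io.k8s.sigs.minikube.step events:
	// every step number appears only once (DistinctCurrentSteps) and the
	// sequence never decreases (IncreasingCurrentSteps).
	func checkSteps(currentSteps []string) error {
		seen := map[int]bool{}
		prev := -1
		for _, s := range currentSteps {
			n, err := strconv.Atoi(s)
			if err != nil {
				return fmt.Errorf("currentstep %q is not a number: %v", s, err)
			}
			if seen[n] {
				return fmt.Errorf("currentstep %d emitted more than once", n)
			}
			seen[n] = true
			if n < prev {
				return fmt.Errorf("currentstep went backwards: %d after %d", n, prev)
			}
			prev = n
		}
		return nil
	}

	func main() {
		// Step numbers in the shape emitted by --output=json (strings inside "data").
		fmt.Println(checkSteps([]string{"0", "1", "3", "19"})) // <nil>
		fmt.Println(checkSteps([]string{"0", "1", "1"}))       // duplicate step
	}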

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-601321 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-601321 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.929327ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"da99ca6f-34ab-4f6e-a572-9882c3cd8830","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-601321] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b6ca15d-ee98-4426-85bd-4f87dc06ae79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19312"}}
	{"specversion":"1.0","id":"d076b67a-ada3-42eb-b9a4-3d385cd86992","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"003b4a6f-af7f-4607-9993-5846a4509be6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig"}}
	{"specversion":"1.0","id":"44309482-6f77-4fb1-82b3-15b14880ed35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube"}}
	{"specversion":"1.0","id":"c3bb7698-5337-42d6-ae46-dd5d1ead302f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"96e6659d-c318-4656-b789-4fda575ba14a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"89f49197-ff57-41d6-a6b7-06edef393de9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-601321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-601321
--- PASS: TestErrorJSONOutput (0.21s)
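
The stdout above shows the CloudEvents-style lines minikube emits under --output=json, ending in an io.k8s.sigs.minikube.error event that carries the exit code and reason. A minimal sketch of decoding one of those lines with the Go standard library; the struct fields simply mirror the keys visible above, and this is not the test's own decoder.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors the fields visible in the JSON lines printed above.
	type event struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		// The final line of the stdout shown above.
		line := `{"specversion":"1.0","id":"89f49197-ff57-41d6-a6b7-06edef393de9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}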

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (90.86s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-862171 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-862171 --driver=kvm2  --container-runtime=crio: (43.764646001s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-865244 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-865244 --driver=kvm2  --container-runtime=crio: (44.362639375s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-862171
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-865244
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-865244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-865244
helpers_test.go:175: Cleaning up "first-862171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-862171
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-862171: (1.036471146s)
--- PASS: TestMinikubeProfile (90.86s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.82s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-290641 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-290641 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.818545208s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.82s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-290641 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-290641 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
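
VerifyMountFirst lists /minikube-host and greps the guest's mount table for a 9p entry over minikube ssh. A rough standalone sketch of the same check, assuming the binary path and profile name from this run; unlike the test, it fetches the mount table and looks for "9p" on the host side instead of piping through grep inside the guest.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const bin = "out/minikube-linux-amd64" // binary path used throughout this run
		const profile = "mount-start-1-290641" // profile started by StartWithMountFirst

		// List the mounted host directory inside the guest.
		if out, err := exec.Command(bin, "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput(); err != nil {
			fmt.Printf("ls /minikube-host failed: %v\n%s", err, out)
			return
		}

		// Fetch the guest's mount table and look for a 9p filesystem entry.
		out, err := exec.Command(bin, "-p", profile, "ssh", "--", "mount").CombinedOutput()
		if err != nil {
			fmt.Printf("mount failed: %v\n%s", err, out)
			return
		}
		if !strings.Contains(string(out), "9p") {
			fmt.Println("no 9p mount found")
			return
		}
		fmt.Println("9p mount present")
	}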

                                                
                                    
TestMountStart/serial/StartWithMountSecond (24.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-306584 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-306584 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.351334551s)
--- PASS: TestMountStart/serial/StartWithMountSecond (24.35s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-306584 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-306584 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.93s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-290641 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.93s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-306584 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-306584 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-306584
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-306584: (1.287199338s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.43s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-306584
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-306584: (21.434169761s)
--- PASS: TestMountStart/serial/RestartStopped (22.43s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-306584 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-306584 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (125.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-615814 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0731 23:09:53.720725 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-615814 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m5.01232217s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.44s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-615814 -- rollout status deployment/busybox: (3.209980722s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- exec busybox-fc5497c4f-csqxw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- exec busybox-fc5497c4f-jtg8z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- exec busybox-fc5497c4f-csqxw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- exec busybox-fc5497c4f-jtg8z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- exec busybox-fc5497c4f-csqxw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- exec busybox-fc5497c4f-jtg8z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.86s)
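
DeployApp2Nodes resolves kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local from both busybox pods through the bundled kubectl. A simplified sketch of that loop in Go, with the pod names hard-coded from this run; the real test discovers them with the jsonpath query shown above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const bin = "out/minikube-linux-amd64"
		const profile = "multinode-615814"

		// Pod names as they appeared in this run.
		pods := []string{"busybox-fc5497c4f-csqxw", "busybox-fc5497c4f-jtg8z"}
		targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

		for _, pod := range pods {
			for _, target := range targets {
				out, err := exec.Command(bin, "kubectl", "-p", profile, "--",
					"exec", pod, "--", "nslookup", target).CombinedOutput()
				if err != nil {
					fmt.Printf("%s could not resolve %s: %v\n%s", pod, target, err, out)
					continue
				}
				fmt.Printf("%s resolved %s\n", pod, target)
			}
		}
	}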

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- exec busybox-fc5497c4f-csqxw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- exec busybox-fc5497c4f-csqxw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- exec busybox-fc5497c4f-jtg8z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-615814 -- exec busybox-fc5497c4f-jtg8z -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.83s)
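
PingHostFrom2Pods first extracts the host address inside each pod with the pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 (the fifth line of nslookup's output carries the resolved address), then pings it once. A sketch for a single pod, reusing the exact pipeline from the log; binary path, profile, and pod name are taken from this run.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const bin = "out/minikube-linux-amd64"
		const profile = "multinode-615814"
		const pod = "busybox-fc5497c4f-csqxw" // one of the pods from the run above

		// Same pipeline the test runs inside the pod to pull out the host IP.
		lookup := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
		out, err := exec.Command(bin, "kubectl", "-p", profile, "--",
			"exec", pod, "--", "sh", "-c", lookup).CombinedOutput()
		if err != nil {
			fmt.Printf("lookup failed: %v\n%s", err, out)
			return
		}
		hostIP := strings.TrimSpace(string(out)) // e.g. 192.168.39.1 in this run

		// One ping from inside the pod back to the host address.
		ping := fmt.Sprintf("ping -c 1 %s", hostIP)
		if out, err := exec.Command(bin, "kubectl", "-p", profile, "--",
			"exec", pod, "--", "sh", "-c", ping).CombinedOutput(); err != nil {
			fmt.Printf("ping %s failed: %v\n%s", hostIP, err, out)
			return
		}
		fmt.Printf("pod %s can reach host %s\n", pod, hostIP)
	}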

                                                
                                    
TestMultiNode/serial/AddNode (50.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-615814 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-615814 -v 3 --alsologtostderr: (50.124368441s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.73s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-615814 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp testdata/cp-test.txt multinode-615814:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp multinode-615814:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4241457848/001/cp-test_multinode-615814.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp multinode-615814:/home/docker/cp-test.txt multinode-615814-m02:/home/docker/cp-test_multinode-615814_multinode-615814-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m02 "sudo cat /home/docker/cp-test_multinode-615814_multinode-615814-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp multinode-615814:/home/docker/cp-test.txt multinode-615814-m03:/home/docker/cp-test_multinode-615814_multinode-615814-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m03 "sudo cat /home/docker/cp-test_multinode-615814_multinode-615814-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp testdata/cp-test.txt multinode-615814-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp multinode-615814-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4241457848/001/cp-test_multinode-615814-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp multinode-615814-m02:/home/docker/cp-test.txt multinode-615814:/home/docker/cp-test_multinode-615814-m02_multinode-615814.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814 "sudo cat /home/docker/cp-test_multinode-615814-m02_multinode-615814.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp multinode-615814-m02:/home/docker/cp-test.txt multinode-615814-m03:/home/docker/cp-test_multinode-615814-m02_multinode-615814-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m03 "sudo cat /home/docker/cp-test_multinode-615814-m02_multinode-615814-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp testdata/cp-test.txt multinode-615814-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp multinode-615814-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4241457848/001/cp-test_multinode-615814-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp multinode-615814-m03:/home/docker/cp-test.txt multinode-615814:/home/docker/cp-test_multinode-615814-m03_multinode-615814.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814 "sudo cat /home/docker/cp-test_multinode-615814-m03_multinode-615814.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 cp multinode-615814-m03:/home/docker/cp-test.txt multinode-615814-m02:/home/docker/cp-test_multinode-615814-m03_multinode-615814-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 ssh -n multinode-615814-m02 "sudo cat /home/docker/cp-test_multinode-615814-m03_multinode-615814-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.59s)
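
CopyFile exercises minikube cp in every direction and reads each copy back with ssh -n <node> "sudo cat ...". A reduced sketch of the basic round trip (copy the testdata file in, cat it back out, compare with the source), assuming the node names from this run.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const bin = "out/minikube-linux-amd64"
		const profile = "multinode-615814"
		nodes := []string{"multinode-615814", "multinode-615814-m02", "multinode-615814-m03"}

		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			fmt.Println("read source:", err)
			return
		}

		for _, node := range nodes {
			// Copy the file into the node, then read it back over ssh and compare.
			dst := node + ":/home/docker/cp-test.txt"
			if out, err := exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt", dst).CombinedOutput(); err != nil {
				fmt.Printf("cp to %s failed: %v\n%s", node, err, out)
				continue
			}
			got, err := exec.Command(bin, "-p", profile, "ssh", "-n", node,
				"sudo cat /home/docker/cp-test.txt").Output()
			if err != nil {
				fmt.Printf("read back from %s failed: %v\n", node, err)
				continue
			}
			if string(got) != string(want) {
				fmt.Printf("%s: copied content does not match the source\n", node)
				continue
			}
			fmt.Printf("%s: round trip ok\n", node)
		}
	}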

                                                
                                    
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-615814 node stop m03: (1.438163324s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-615814 status: exit status 7 (434.230206ms)

                                                
                                                
-- stdout --
	multinode-615814
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-615814-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-615814-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-615814 status --alsologtostderr: exit status 7 (436.244093ms)

                                                
                                                
-- stdout --
	multinode-615814
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-615814-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-615814-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0731 23:12:03.806453 1211356 out.go:291] Setting OutFile to fd 1 ...
	I0731 23:12:03.806590 1211356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:12:03.806599 1211356 out.go:304] Setting ErrFile to fd 2...
	I0731 23:12:03.806603 1211356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0731 23:12:03.806808 1211356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19312-1172186/.minikube/bin
	I0731 23:12:03.806983 1211356 out.go:298] Setting JSON to false
	I0731 23:12:03.807012 1211356 mustload.go:65] Loading cluster: multinode-615814
	I0731 23:12:03.807069 1211356 notify.go:220] Checking for updates...
	I0731 23:12:03.807362 1211356 config.go:182] Loaded profile config "multinode-615814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0731 23:12:03.807380 1211356 status.go:255] checking status of multinode-615814 ...
	I0731 23:12:03.807743 1211356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:12:03.807809 1211356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:12:03.824138 1211356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37651
	I0731 23:12:03.824630 1211356 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:12:03.825201 1211356 main.go:141] libmachine: Using API Version  1
	I0731 23:12:03.825228 1211356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:12:03.825651 1211356 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:12:03.825874 1211356 main.go:141] libmachine: (multinode-615814) Calling .GetState
	I0731 23:12:03.827581 1211356 status.go:330] multinode-615814 host status = "Running" (err=<nil>)
	I0731 23:12:03.827611 1211356 host.go:66] Checking if "multinode-615814" exists ...
	I0731 23:12:03.827953 1211356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:12:03.828001 1211356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:12:03.844385 1211356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44849
	I0731 23:12:03.844944 1211356 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:12:03.845471 1211356 main.go:141] libmachine: Using API Version  1
	I0731 23:12:03.845493 1211356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:12:03.845808 1211356 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:12:03.845998 1211356 main.go:141] libmachine: (multinode-615814) Calling .GetIP
	I0731 23:12:03.849238 1211356 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:12:03.849739 1211356 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:12:03.849782 1211356 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:12:03.850012 1211356 host.go:66] Checking if "multinode-615814" exists ...
	I0731 23:12:03.850482 1211356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:12:03.850541 1211356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:12:03.870365 1211356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44767
	I0731 23:12:03.870920 1211356 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:12:03.871484 1211356 main.go:141] libmachine: Using API Version  1
	I0731 23:12:03.871515 1211356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:12:03.871966 1211356 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:12:03.872184 1211356 main.go:141] libmachine: (multinode-615814) Calling .DriverName
	I0731 23:12:03.872427 1211356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 23:12:03.872470 1211356 main.go:141] libmachine: (multinode-615814) Calling .GetSSHHostname
	I0731 23:12:03.875644 1211356 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:12:03.876108 1211356 main.go:141] libmachine: (multinode-615814) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:ee:5b", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:09:06 +0000 UTC Type:0 Mac:52:54:00:38:ee:5b Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-615814 Clientid:01:52:54:00:38:ee:5b}
	I0731 23:12:03.876145 1211356 main.go:141] libmachine: (multinode-615814) DBG | domain multinode-615814 has defined IP address 192.168.39.129 and MAC address 52:54:00:38:ee:5b in network mk-multinode-615814
	I0731 23:12:03.876459 1211356 main.go:141] libmachine: (multinode-615814) Calling .GetSSHPort
	I0731 23:12:03.876696 1211356 main.go:141] libmachine: (multinode-615814) Calling .GetSSHKeyPath
	I0731 23:12:03.876874 1211356 main.go:141] libmachine: (multinode-615814) Calling .GetSSHUsername
	I0731 23:12:03.877082 1211356 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/multinode-615814/id_rsa Username:docker}
	I0731 23:12:03.960166 1211356 ssh_runner.go:195] Run: systemctl --version
	I0731 23:12:03.966457 1211356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:12:03.981360 1211356 kubeconfig.go:125] found "multinode-615814" server: "https://192.168.39.129:8443"
	I0731 23:12:03.981399 1211356 api_server.go:166] Checking apiserver status ...
	I0731 23:12:03.981457 1211356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0731 23:12:03.995968 1211356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup
	W0731 23:12:04.006637 1211356 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1155/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0731 23:12:04.006713 1211356 ssh_runner.go:195] Run: ls
	I0731 23:12:04.011430 1211356 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0731 23:12:04.015802 1211356 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0731 23:12:04.015833 1211356 status.go:422] multinode-615814 apiserver status = Running (err=<nil>)
	I0731 23:12:04.015844 1211356 status.go:257] multinode-615814 status: &{Name:multinode-615814 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0731 23:12:04.015896 1211356 status.go:255] checking status of multinode-615814-m02 ...
	I0731 23:12:04.016281 1211356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:12:04.016324 1211356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:12:04.032540 1211356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I0731 23:12:04.033159 1211356 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:12:04.033677 1211356 main.go:141] libmachine: Using API Version  1
	I0731 23:12:04.033698 1211356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:12:04.034080 1211356 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:12:04.034287 1211356 main.go:141] libmachine: (multinode-615814-m02) Calling .GetState
	I0731 23:12:04.036143 1211356 status.go:330] multinode-615814-m02 host status = "Running" (err=<nil>)
	I0731 23:12:04.036164 1211356 host.go:66] Checking if "multinode-615814-m02" exists ...
	I0731 23:12:04.036450 1211356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:12:04.036498 1211356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:12:04.052892 1211356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46741
	I0731 23:12:04.053382 1211356 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:12:04.053911 1211356 main.go:141] libmachine: Using API Version  1
	I0731 23:12:04.053936 1211356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:12:04.054302 1211356 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:12:04.054555 1211356 main.go:141] libmachine: (multinode-615814-m02) Calling .GetIP
	I0731 23:12:04.057644 1211356 main.go:141] libmachine: (multinode-615814-m02) DBG | domain multinode-615814-m02 has defined MAC address 52:54:00:bb:4d:70 in network mk-multinode-615814
	I0731 23:12:04.058056 1211356 main.go:141] libmachine: (multinode-615814-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:4d:70", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:10:20 +0000 UTC Type:0 Mac:52:54:00:bb:4d:70 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-615814-m02 Clientid:01:52:54:00:bb:4d:70}
	I0731 23:12:04.058086 1211356 main.go:141] libmachine: (multinode-615814-m02) DBG | domain multinode-615814-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:bb:4d:70 in network mk-multinode-615814
	I0731 23:12:04.058276 1211356 host.go:66] Checking if "multinode-615814-m02" exists ...
	I0731 23:12:04.058607 1211356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:12:04.058664 1211356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:12:04.074928 1211356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37011
	I0731 23:12:04.075403 1211356 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:12:04.075881 1211356 main.go:141] libmachine: Using API Version  1
	I0731 23:12:04.075904 1211356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:12:04.076256 1211356 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:12:04.076476 1211356 main.go:141] libmachine: (multinode-615814-m02) Calling .DriverName
	I0731 23:12:04.076672 1211356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0731 23:12:04.076692 1211356 main.go:141] libmachine: (multinode-615814-m02) Calling .GetSSHHostname
	I0731 23:12:04.079507 1211356 main.go:141] libmachine: (multinode-615814-m02) DBG | domain multinode-615814-m02 has defined MAC address 52:54:00:bb:4d:70 in network mk-multinode-615814
	I0731 23:12:04.079958 1211356 main.go:141] libmachine: (multinode-615814-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:4d:70", ip: ""} in network mk-multinode-615814: {Iface:virbr1 ExpiryTime:2024-08-01 00:10:20 +0000 UTC Type:0 Mac:52:54:00:bb:4d:70 Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:multinode-615814-m02 Clientid:01:52:54:00:bb:4d:70}
	I0731 23:12:04.079995 1211356 main.go:141] libmachine: (multinode-615814-m02) DBG | domain multinode-615814-m02 has defined IP address 192.168.39.94 and MAC address 52:54:00:bb:4d:70 in network mk-multinode-615814
	I0731 23:12:04.080218 1211356 main.go:141] libmachine: (multinode-615814-m02) Calling .GetSSHPort
	I0731 23:12:04.080459 1211356 main.go:141] libmachine: (multinode-615814-m02) Calling .GetSSHKeyPath
	I0731 23:12:04.080631 1211356 main.go:141] libmachine: (multinode-615814-m02) Calling .GetSSHUsername
	I0731 23:12:04.080788 1211356 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19312-1172186/.minikube/machines/multinode-615814-m02/id_rsa Username:docker}
	I0731 23:12:04.159201 1211356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0731 23:12:04.173118 1211356 status.go:257] multinode-615814-m02 status: &{Name:multinode-615814-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0731 23:12:04.173174 1211356 status.go:255] checking status of multinode-615814-m03 ...
	I0731 23:12:04.173518 1211356 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0731 23:12:04.173550 1211356 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0731 23:12:04.189936 1211356 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34339
	I0731 23:12:04.190482 1211356 main.go:141] libmachine: () Calling .GetVersion
	I0731 23:12:04.190993 1211356 main.go:141] libmachine: Using API Version  1
	I0731 23:12:04.191017 1211356 main.go:141] libmachine: () Calling .SetConfigRaw
	I0731 23:12:04.191375 1211356 main.go:141] libmachine: () Calling .GetMachineName
	I0731 23:12:04.191584 1211356 main.go:141] libmachine: (multinode-615814-m03) Calling .GetState
	I0731 23:12:04.193335 1211356 status.go:330] multinode-615814-m03 host status = "Stopped" (err=<nil>)
	I0731 23:12:04.193355 1211356 status.go:343] host is not running, skipping remaining checks
	I0731 23:12:04.193363 1211356 status.go:257] multinode-615814-m03 status: &{Name:multinode-615814-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
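
Note that status deliberately exits non-zero once any node is stopped (exit status 7 above), even though the command itself ran fine, so callers have to read the exit code instead of treating every error as a failure. A small sketch of how such a caller might distinguish the cases, using the binary path and profile from this run.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		const bin = "out/minikube-linux-amd64"
		const profile = "multinode-615814"

		// `status` returns a non-zero exit code (7 in the run above) when a
		// node is stopped, so inspect the code rather than fail outright.
		out, err := exec.Command(bin, "-p", profile, "status").CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes running")
		case errors.As(err, &exitErr):
			fmt.Println("status exit code:", exitErr.ExitCode())
		default:
			fmt.Println("could not run status:", err)
		}
	}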

                                                
                                    
TestMultiNode/serial/StartAfterStop (39.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-615814 node start m03 -v=7 --alsologtostderr: (38.759344327s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.41s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-615814 node delete m03: (1.836942503s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.40s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (183.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-615814 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-615814 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m3.171013819s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-615814 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (183.74s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-615814
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-615814-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-615814-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (66.594087ms)

                                                
                                                
-- stdout --
	* [multinode-615814-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-615814-m02' is duplicated with machine name 'multinode-615814-m02' in profile 'multinode-615814'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-615814-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-615814-m03 --driver=kvm2  --container-runtime=crio: (47.423034781s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-615814
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-615814: exit status 80 (221.811674ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-615814 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-615814-m03 already exists in multinode-615814-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-615814-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-615814-m03: (1.048705942s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.81s)

                                                
                                    
TestScheduledStopUnix (111.6s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-702146 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-702146 --memory=2048 --driver=kvm2  --container-runtime=crio: (39.927402016s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-702146 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-702146 -n scheduled-stop-702146
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-702146 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-702146 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-702146 -n scheduled-stop-702146
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-702146
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-702146 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-702146
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-702146: exit status 7 (68.011109ms)

                                                
                                                
-- stdout --
	scheduled-stop-702146
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-702146 -n scheduled-stop-702146
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-702146 -n scheduled-stop-702146: exit status 7 (68.490802ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-702146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-702146
--- PASS: TestScheduledStopUnix (111.60s)
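
TestScheduledStopUnix walks through scheduling a stop, confirming it is pending, cancelling it, and finally letting a short schedule fire, after which status reports Stopped and exits with status 7. A compressed sketch of that sequence using the same flags; error handling is trimmed, and the 30s sleep is an arbitrary stand-in for the test's polling.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func run(bin string, args ...string) (string, error) {
		out, err := exec.Command(bin, args...).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const bin = "out/minikube-linux-amd64"
		const profile = "scheduled-stop-702146"

		// Schedule a stop far in the future, confirm one is pending, then cancel it.
		run(bin, "stop", "-p", profile, "--schedule", "5m")
		ttl, _ := run(bin, "status", "--format={{.TimeToStop}}", "-p", profile, "-n", profile)
		fmt.Println("time to stop:", ttl)
		run(bin, "stop", "-p", profile, "--cancel-scheduled")

		// Schedule a short stop and wait for it to fire; afterwards status
		// reports Stopped and exits with code 7, as seen in the run above.
		run(bin, "stop", "-p", profile, "--schedule", "15s")
		time.Sleep(30 * time.Second)
		host, err := run(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
		fmt.Println("host:", host, "err:", err)
	}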

                                                
                                    
TestRunningBinaryUpgrade (133.34s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3510240120 start -p running-upgrade-524949 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3510240120 start -p running-upgrade-524949 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (52.642118905s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-524949 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-524949 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.984861087s)
helpers_test.go:175: Cleaning up "running-upgrade-524949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-524949
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-524949: (1.294264289s)
--- PASS: TestRunningBinaryUpgrade (133.34s)

                                                
                                    
TestPause/serial/Start (107.76s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-343154 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-343154 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m47.75594664s)
--- PASS: TestPause/serial/Start (107.76s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.95s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (172.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1302337163 start -p stopped-upgrade-692279 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0731 23:29:53.720939 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1302337163 start -p stopped-upgrade-692279 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m3.850671497s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1302337163 -p stopped-upgrade-692279 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1302337163 -p stopped-upgrade-692279 stop: (2.143935586s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-692279 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-692279 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.771294206s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (172.77s)
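
The upgrade test drives three commands: start a cluster with a released v1.26.0 binary, stop it with that same binary, then start the stopped profile with the binary under test. A sketch of the sequence as it ran above; the /tmp path is the temp file name from this particular run, not a stable location.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(bin string, args ...string) error {
		out, err := exec.Command(bin, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v\n%s", bin, args, err, out)
		}
		return nil
	}

	func main() {
		// Paths as they appeared in this run: a downloaded v1.26.0 release
		// binary and the binary under test.
		oldBin := "/tmp/minikube-v1.26.0.1302337163"
		newBin := "out/minikube-linux-amd64"
		profile := "stopped-upgrade-692279"

		steps := [][]string{
			// 1. Bring the cluster up with the old release.
			{oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio"},
			// 2. Stop it with the same old release.
			{oldBin, "-p", profile, "stop"},
			// 3. Start the stopped cluster with the new binary; this is the upgrade under test.
			{newBin, "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio"},
		}
		for _, s := range steps {
			if err := run(s[0], s[1:]...); err != nil {
				fmt.Println(err)
				return
			}
		}
		fmt.Println("upgrade path completed")
	}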

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-692279
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-741714 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-741714 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (65.917692ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-741714] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19312
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19312-1172186/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19312-1172186/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
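For reference, the MK_USAGE rejection above (exit status 14) is exactly what this subtest asserts: --kubernetes-version cannot be combined with --no-kubernetes. A minimal sketch of the failing call and the suggested remedy, using a hypothetical profile name "demo":

	$ minikube start -p demo --no-kubernetes --kubernetes-version=1.20   # rejected with MK_USAGE, exit status 14
	$ minikube config unset kubernetes-version                           # clear a globally configured version, as the error suggests
	$ minikube start -p demo --no-kubernetes                             # start the VM without deploying Kubernetes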

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (58.34s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-741714 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-741714 --driver=kvm2  --container-runtime=crio: (58.065794519s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-741714 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (58.34s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (55.95s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-741714 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-741714 --no-kubernetes --driver=kvm2  --container-runtime=crio: (54.608865121s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-741714 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-741714 status -o json: exit status 2 (262.873456ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-741714","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-741714
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-741714: (1.079107007s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (55.95s)
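The exit status 2 from the status command above is expected here: after restarting the existing profile with --no-kubernetes, the host is Running while Kubelet and APIServer report Stopped, and minikube returns a non-zero status when a component is stopped. A small sketch of inspecting that JSON, assuming jq is available on the host:

	$ out/minikube-linux-amd64 -p NoKubernetes-741714 status -o json                      # exits 2 because Kubelet/APIServer are Stopped
	$ out/minikube-linux-amd64 -p NoKubernetes-741714 status -o json | jq -r '.Kubelet'   # prints Stopped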

                                                
                                    
TestNoKubernetes/serial/Start (42.98s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-741714 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-741714 --no-kubernetes --driver=kvm2  --container-runtime=crio: (42.975032069s)
--- PASS: TestNoKubernetes/serial/Start (42.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-741714 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-741714 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.52595ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
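The non-zero result above is the assertion this subtest makes: systemctl is-active exits 0 only when the unit is active, so the remote status 3 (surfaced as exit status 1 by minikube ssh) confirms no kubelet is running. A minimal sketch without --quiet so the state is printed, reusing the same profile:

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-741714 "sudo systemctl is-active kubelet"   # prints inactive/unknown and exits non-zero when the kubelet unit is not running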

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.19s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-741714
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-741714: (1.307767839s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (47.34s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-741714 --driver=kvm2  --container-runtime=crio
E0731 23:34:53.721281 1179400 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19312-1172186/.minikube/profiles/functional-754682/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-741714 --driver=kvm2  --container-runtime=crio: (47.344174725s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (47.34s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-741714 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-741714 "sudo systemctl is-active --quiet service kubelet": exit status 1 (223.534423ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    

Test skip (30/216)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    